Feb 04 08:41:28 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 04 08:41:28 crc restorecon[4582]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:28 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 
08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 08:41:29 crc 
restorecon[4582]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 
08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 
08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc 
restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 08:41:29 crc restorecon[4582]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 08:41:29 crc restorecon[4582]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 04 08:41:30 crc kubenswrapper[4644]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 04 08:41:30 crc kubenswrapper[4644]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 04 08:41:30 crc kubenswrapper[4644]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 04 08:41:30 crc kubenswrapper[4644]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 04 08:41:30 crc kubenswrapper[4644]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 04 08:41:30 crc kubenswrapper[4644]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.386915 4644 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392234 4644 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392268 4644 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392279 4644 feature_gate.go:330] unrecognized feature gate: Example Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392288 4644 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392296 4644 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392306 4644 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392314 4644 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392359 4644 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392367 4644 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392376 4644 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392384 4644 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392392 4644 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392400 4644 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392408 4644 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392415 4644 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392423 4644 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392432 4644 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392439 4644 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392448 4644 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392456 4644 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392468 4644 feature_gate.go:353] Setting GA feature gate 
DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392480 4644 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392491 4644 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392501 4644 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392510 4644 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392518 4644 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392525 4644 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392535 4644 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392542 4644 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392550 4644 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392558 4644 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392566 4644 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392574 4644 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392583 4644 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392591 4644 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392599 4644 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392606 4644 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392615 4644 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392623 4644 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392656 4644 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392666 4644 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392674 4644 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392685 4644 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392695 4644 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392706 4644 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392714 4644 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 04 
08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392722 4644 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392730 4644 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392738 4644 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392748 4644 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392756 4644 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392765 4644 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392772 4644 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392783 4644 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392793 4644 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392803 4644 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392811 4644 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392820 4644 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392829 4644 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392837 4644 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392848 4644 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392858 4644 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392868 4644 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392879 4644 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392890 4644 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392898 4644 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392906 4644 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392916 4644 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392925 4644 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392933 4644 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.392941 4644 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393154 4644 flags.go:64] FLAG: --address="0.0.0.0" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393173 4644 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393190 4644 flags.go:64] FLAG: --anonymous-auth="true" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393203 4644 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393215 4644 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393227 4644 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393240 4644 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393253 4644 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393263 4644 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393272 4644 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393282 4644 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393293 4644 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393302 4644 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393312 4644 flags.go:64] FLAG: --cgroup-root="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393321 4644 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393358 4644 flags.go:64] FLAG: --client-ca-file="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393366 4644 flags.go:64] FLAG: --cloud-config="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393375 4644 flags.go:64] FLAG: --cloud-provider="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393384 4644 flags.go:64] FLAG: --cluster-dns="[]" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393396 4644 flags.go:64] FLAG: --cluster-domain="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 
08:41:30.393405 4644 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393415 4644 flags.go:64] FLAG: --config-dir="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393424 4644 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393435 4644 flags.go:64] FLAG: --container-log-max-files="5" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393448 4644 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393457 4644 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393467 4644 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393477 4644 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393487 4644 flags.go:64] FLAG: --contention-profiling="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393496 4644 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393505 4644 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393515 4644 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393525 4644 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393542 4644 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393551 4644 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393561 4644 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393570 4644 flags.go:64] FLAG: --enable-load-reader="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393579 4644 flags.go:64] FLAG: --enable-server="true" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393589 4644 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393603 4644 flags.go:64] FLAG: --event-burst="100" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393613 4644 flags.go:64] FLAG: --event-qps="50" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393623 4644 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393632 4644 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393641 4644 flags.go:64] FLAG: --eviction-hard="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393653 4644 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393662 4644 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393672 4644 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393681 4644 flags.go:64] FLAG: --eviction-soft="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393690 4644 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393699 4644 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 04 
08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393709 4644 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393718 4644 flags.go:64] FLAG: --experimental-mounter-path="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393727 4644 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393736 4644 flags.go:64] FLAG: --fail-swap-on="true" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393762 4644 flags.go:64] FLAG: --feature-gates="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393774 4644 flags.go:64] FLAG: --file-check-frequency="20s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393784 4644 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393793 4644 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393803 4644 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393812 4644 flags.go:64] FLAG: --healthz-port="10248" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393823 4644 flags.go:64] FLAG: --help="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393832 4644 flags.go:64] FLAG: --hostname-override="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393841 4644 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393851 4644 flags.go:64] FLAG: --http-check-frequency="20s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393860 4644 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393871 4644 flags.go:64] FLAG: --image-credential-provider-config="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393881 4644 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393891 4644 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393902 4644 flags.go:64] FLAG: --image-service-endpoint="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393912 4644 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393921 4644 flags.go:64] FLAG: --kube-api-burst="100" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393932 4644 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393942 4644 flags.go:64] FLAG: --kube-api-qps="50" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393953 4644 flags.go:64] FLAG: --kube-reserved="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393962 4644 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393971 4644 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393981 4644 flags.go:64] FLAG: --kubelet-cgroups="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393989 4644 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.393999 4644 flags.go:64] FLAG: --lock-file="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394008 4644 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 04 08:41:30 crc kubenswrapper[4644]: 
I0204 08:41:30.394017 4644 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394027 4644 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394041 4644 flags.go:64] FLAG: --log-json-split-stream="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394050 4644 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394059 4644 flags.go:64] FLAG: --log-text-split-stream="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394068 4644 flags.go:64] FLAG: --logging-format="text" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394077 4644 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394087 4644 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394096 4644 flags.go:64] FLAG: --manifest-url="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394105 4644 flags.go:64] FLAG: --manifest-url-header="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394117 4644 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394127 4644 flags.go:64] FLAG: --max-open-files="1000000" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394139 4644 flags.go:64] FLAG: --max-pods="110" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394148 4644 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394158 4644 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394167 4644 flags.go:64] FLAG: --memory-manager-policy="None" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394176 4644 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394185 4644 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394195 4644 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394204 4644 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394225 4644 flags.go:64] FLAG: --node-status-max-images="50" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394234 4644 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394245 4644 flags.go:64] FLAG: --oom-score-adj="-999" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394254 4644 flags.go:64] FLAG: --pod-cidr="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394264 4644 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394278 4644 flags.go:64] FLAG: --pod-manifest-path="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394288 4644 flags.go:64] FLAG: --pod-max-pids="-1" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394297 4644 flags.go:64] FLAG: --pods-per-core="0" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394306 4644 flags.go:64] FLAG: --port="10250" Feb 04 08:41:30 crc 
kubenswrapper[4644]: I0204 08:41:30.394315 4644 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394347 4644 flags.go:64] FLAG: --provider-id="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394356 4644 flags.go:64] FLAG: --qos-reserved="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394366 4644 flags.go:64] FLAG: --read-only-port="10255" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394376 4644 flags.go:64] FLAG: --register-node="true" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394386 4644 flags.go:64] FLAG: --register-schedulable="true" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394396 4644 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394411 4644 flags.go:64] FLAG: --registry-burst="10" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394421 4644 flags.go:64] FLAG: --registry-qps="5" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394430 4644 flags.go:64] FLAG: --reserved-cpus="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394439 4644 flags.go:64] FLAG: --reserved-memory="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394451 4644 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394460 4644 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394469 4644 flags.go:64] FLAG: --rotate-certificates="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394479 4644 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394488 4644 flags.go:64] FLAG: --runonce="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394497 4644 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394506 4644 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394516 4644 flags.go:64] FLAG: --seccomp-default="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394525 4644 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394534 4644 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394544 4644 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394560 4644 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394569 4644 flags.go:64] FLAG: --storage-driver-password="root" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394581 4644 flags.go:64] FLAG: --storage-driver-secure="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394590 4644 flags.go:64] FLAG: --storage-driver-table="stats" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394600 4644 flags.go:64] FLAG: --storage-driver-user="root" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394609 4644 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394619 4644 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394629 4644 flags.go:64] FLAG: --system-cgroups="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 
08:41:30.394638 4644 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394652 4644 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394661 4644 flags.go:64] FLAG: --tls-cert-file="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394670 4644 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394682 4644 flags.go:64] FLAG: --tls-min-version="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394691 4644 flags.go:64] FLAG: --tls-private-key-file="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394701 4644 flags.go:64] FLAG: --topology-manager-policy="none" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394710 4644 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394719 4644 flags.go:64] FLAG: --topology-manager-scope="container" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394729 4644 flags.go:64] FLAG: --v="2" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394741 4644 flags.go:64] FLAG: --version="false" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394753 4644 flags.go:64] FLAG: --vmodule="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394765 4644 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.394775 4644 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.394993 4644 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395006 4644 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395016 4644 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395024 4644 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395034 4644 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395042 4644 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395053 4644 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
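Every effective command-line value is echoed through flags.go:64 above in the fixed form FLAG: --name="value", so the whole dump can be recovered as structured data, for example to diff this invocation against another node's. A rough sketch, assuming the journal text has been saved to kubelet.log (a hypothetical filename):

import re

# Recover the kubelet's effective flag values from the flags.go:64 dump.
flag_re = re.compile(r'flags\.go:64\] FLAG: --([\w-]+)="(.*?)"')
flags = {}
with open("kubelet.log", encoding="utf-8") as fh:
    for name, value in flag_re.findall(fh.read()):
        flags[name] = value

print(flags["container-runtime-endpoint"])  # /var/run/crio/crio.sock
print(flags["node-ip"])                     # 192.168.126.11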
Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395062 4644 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395071 4644 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395080 4644 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395090 4644 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395098 4644 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395106 4644 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395114 4644 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395122 4644 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395130 4644 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395142 4644 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395151 4644 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395159 4644 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395167 4644 feature_gate.go:330] unrecognized feature gate: Example Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395175 4644 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395183 4644 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395192 4644 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395206 4644 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395214 4644 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395222 4644 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395229 4644 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395237 4644 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395245 4644 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395253 4644 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395261 4644 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395269 4644 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395277 4644 feature_gate.go:330] 
unrecognized feature gate: ExternalOIDC Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395286 4644 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395294 4644 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395304 4644 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395314 4644 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395346 4644 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395356 4644 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395365 4644 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395376 4644 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395385 4644 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395395 4644 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395403 4644 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395412 4644 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395420 4644 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395428 4644 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395436 4644 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395444 4644 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395452 4644 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395460 4644 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395468 4644 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395476 4644 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395484 4644 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395492 4644 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395500 4644 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395507 4644 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395515 
4644 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395524 4644 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395534 4644 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395542 4644 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395551 4644 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395559 4644 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395568 4644 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395576 4644 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395584 4644 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395593 4644 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395601 4644 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395609 4644 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395617 4644 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.395627 4644 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
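The unrecognized-gate warnings are emitted once per parsing pass, and the same OpenShift-curated gate list is applied to the kubelet's own feature-gate registry several times during startup, which is why the identical set appears at 08:41:30.392, again at .395, and twice more below at .408 and .409. One way to confirm that every pass warns about the same names is to tally them from the journal text; a sketch under the same kubelet.log assumption as above:

import re
from collections import Counter

# Tally which gates the kubelet does not recognize, and how many
# startup passes emitted a warning for each of them.
pattern = re.compile(r"unrecognized feature gate: (\S+)")
with open("kubelet.log", encoding="utf-8") as fh:
    counts = Counter(pattern.findall(fh.read()))

for gate, n in sorted(counts.items()):
    print(f"{gate}: warned {n} time(s)")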
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.395642 4644 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.408040 4644 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.408080 4644 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408191 4644 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408212 4644 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408221 4644 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408230 4644 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408240 4644 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408248 4644 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408257 4644 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408267 4644 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
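The feature_gate.go:386 entry above is the useful summary line: it prints the gate map that actually took effect after the unrecognized names were dropped, in Go's map[key:value ...] notation. A sketch of pulling it back into a dictionary (the sample string below is abridged from the entry above):

import re

line = ('feature gates: {map[CloudDualStackNodeIPs:true '
        'DisableKubeletCloudCredentialProviders:true KMSv1:true '
        'ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}')

# Go prints maps as map[key:value key:value ...]; pull the pairs back out.
pairs = re.findall(r"(\w+):(true|false)", line)
gates = {name: value == "true" for name, value in pairs}
print(gates["KMSv1"])                  # True
print(gates["VolumeAttributesClass"])  # False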
Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408279 4644 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408288 4644 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408297 4644 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408306 4644 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408314 4644 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408324 4644 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408358 4644 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408366 4644 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408375 4644 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408385 4644 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408394 4644 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408402 4644 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408411 4644 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408419 4644 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408426 4644 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408435 4644 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408443 4644 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408451 4644 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408459 4644 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408466 4644 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408477 4644 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408486 4644 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408494 4644 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408502 4644 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408510 4644 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408518 4644 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408527 4644 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408535 4644 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408543 4644 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408551 4644 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408559 4644 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408567 4644 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408575 4644 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408583 4644 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408590 4644 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408598 4644 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408607 4644 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408615 4644 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408623 4644 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408633 4644 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408671 4644 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408681 4644 feature_gate.go:330] unrecognized feature gate: Example Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408691 4644 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408699 4644 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408707 4644 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408715 4644 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408724 4644 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408731 4644 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408740 4644 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408749 4644 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408757 4644 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408766 4644 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408774 4644 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408782 4644 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408792 4644 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408802 4644 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408810 4644 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408819 4644 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408827 4644 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408835 4644 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408843 4644 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408851 4644 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.408861 4644 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.408873 4644 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409104 4644 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409119 4644 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409128 4644 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409136 4644 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409145 4644 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409155 4644 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409163 4644 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409171 4644 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409181 4644 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409191 4644 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409202 4644 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409211 4644 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409219 4644 feature_gate.go:330] unrecognized feature gate: Example Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409228 4644 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409235 4644 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409243 4644 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409251 4644 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409261 4644 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409272 4644 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409283 4644 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409292 4644 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409302 4644 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409311 4644 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409320 4644 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409353 4644 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409361 4644 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409369 4644 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409377 4644 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409384 4644 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409393 4644 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409401 4644 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409409 4644 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409416 4644 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409424 4644 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409434 4644 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 04 08:41:30 crc kubenswrapper[4644]: 
W0204 08:41:30.409442 4644 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409450 4644 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409458 4644 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409466 4644 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409474 4644 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409481 4644 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409489 4644 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409497 4644 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409505 4644 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409513 4644 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409521 4644 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409530 4644 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409537 4644 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409545 4644 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409553 4644 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409561 4644 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409568 4644 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409576 4644 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409584 4644 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409592 4644 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409600 4644 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409608 4644 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409615 4644 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409623 4644 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409630 4644 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409638 4644 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 04 08:41:30 crc 
kubenswrapper[4644]: W0204 08:41:30.409648 4644 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409657 4644 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409667 4644 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409675 4644 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409684 4644 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409692 4644 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409703 4644 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409713 4644 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409721 4644 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.409732 4644 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.409744 4644 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.409957 4644 server.go:940] "Client rotation is on, will bootstrap in background" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.415726 4644 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.415866 4644 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
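With the existing kubeconfig still valid, no bootstrap is needed and the kubelet moves straight to client-certificate rotation in the entries that follow: rotation is enabled, and a deadline of 2025-12-01 is chosen for a certificate expiring 2026-02-24. Upstream's certificate manager picks that deadline at a random point roughly 70-90% of the way through the certificate's lifetime (80% plus or minus 10%). A sketch of the computation, assuming a one-year certificate issued 2025-02-24; the notBefore time is not shown in this log, so the window below is illustrative:

import random
from datetime import datetime, timedelta

# Hypothetical validity window; only the expiry appears in the log above.
not_before = datetime(2025, 2, 24, 5, 52, 8)
not_after = datetime(2026, 2, 24, 5, 52, 8)
lifetime = not_after - not_before

# Jitter the rotation deadline into roughly the 70-90% band of the lifetime.
deadline = not_before + timedelta(
    seconds=lifetime.total_seconds() * random.uniform(0.7, 0.9))
print(deadline)  # lands between ~2025-11-07 and ~2026-01-19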
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.418414 4644 server.go:997] "Starting client certificate rotation"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.418494 4644 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.418779 4644 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-01 03:12:02.746073997 +0000 UTC
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.418878 4644 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.442164 4644 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.446543 4644 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 04 08:41:30 crc kubenswrapper[4644]: E0204 08:41:30.447882 4644 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.464474 4644 log.go:25] "Validated CRI v1 runtime API"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.507372 4644 log.go:25] "Validated CRI v1 image API"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.511533 4644 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.518963 4644 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-04-08-35-41-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.519021 4644 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.540556 4644 manager.go:217] Machine: {Timestamp:2026-02-04 08:41:30.537972353 +0000 UTC m=+0.578030188 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199484928 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:48850853-7009-48fc-9774-1a351e978855 BootID:4deccf2e-d791-4944-9e8f-0b83ba83be33 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599742464 Type:vfs Inodes:3076109 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076109 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b3:1f:c9 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b3:1f:c9 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:e7:a6:ce Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:4e:fa:7e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e9:0b:00 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:69:88:0b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7a:c1:e8:32:a5:41 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:aa:13:83:cd:5f:2f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199484928 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.540933 4644 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.541124 4644 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.543691 4644 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.544078 4644 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.544138 4644 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.544635 4644 topology_manager.go:138] "Creating topology manager with none policy"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.544657 4644 container_manager_linux.go:303] "Creating device plugin manager"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.545448 4644 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.545508 4644 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.545744 4644 state_mem.go:36] "Initialized new in-memory state store"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.545897 4644 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.550109 4644 kubelet.go:418] "Attempting to sync node with API server"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.550162 4644 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.550227 4644 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.550254 4644 kubelet.go:324] "Adding apiserver pod source"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.550282 4644 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.555238 4644 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.555430 4644 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused
Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.555421 4644 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused
Feb 04 08:41:30 crc kubenswrapper[4644]: E0204 08:41:30.555606 4644 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError"
Feb 04 08:41:30 crc kubenswrapper[4644]: E0204 08:41:30.555605 4644 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.557194 4644 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.561244 4644 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.563522 4644 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.563592 4644 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.563612 4644 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.563627 4644 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.563652 4644 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.563666 4644 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.563681 4644 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.563703 4644 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.563721 4644 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.563739 4644 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.563761 4644 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.563775 4644 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.567401 4644 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.568220 4644 server.go:1280] "Started kubelet"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.570556 4644 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.570555 4644 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.571607 4644 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.571761 4644 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused
Feb 04 08:41:30 crc systemd[1]: Started Kubernetes Kubelet.
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.573631 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.573697 4644 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.573734 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 20:24:03.537403486 +0000 UTC
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.574463 4644 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.574503 4644 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.574702 4644 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 04 08:41:30 crc kubenswrapper[4644]: E0204 08:41:30.575669 4644 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.576540 4644 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused
Feb 04 08:41:30 crc kubenswrapper[4644]: E0204 08:41:30.576624 4644 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError"
Feb 04 08:41:30 crc kubenswrapper[4644]: E0204 08:41:30.577798 4644 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="200ms"
Feb 04 08:41:30 crc kubenswrapper[4644]: E0204 08:41:30.578297 4644 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.136:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890fe82a1ab6476 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-04 08:41:30.568172662 +0000 UTC m=+0.608230457,LastTimestamp:2026-02-04 08:41:30.568172662 +0000 UTC m=+0.608230457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.584159 4644 server.go:460] "Adding debug handlers to kubelet server"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.587264 4644 factory.go:153] Registering CRI-O factory
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.587316 4644 factory.go:221] Registration of the crio container factory successfully
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.587547 4644 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.587579 4644 factory.go:55] Registering systemd factory
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.587596 4644 factory.go:221] Registration of the systemd container factory successfully
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.587638 4644 factory.go:103] Registering Raw factory
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.587670 4644 manager.go:1196] Started watching for new ooms in manager
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.591205 4644 manager.go:319] Starting recovery of all containers
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.599862 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.599978 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.599996 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600022 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600039 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600056 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600077 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600094 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600118 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600132 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600148 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600170 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600186 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600215 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600232 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600253 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600269 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600292 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600306 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600338 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600364 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600379 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600401 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600416 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600431 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600451 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600473 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600497 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600516 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600531 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600544 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600565 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600581 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600599 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600615 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600630 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600648 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600663 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600681 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600698 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600711 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600730 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600746 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600761 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600786 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600823 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600840 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600854 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600868 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600885 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600934 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600953 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.600978 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601038 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601062 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601087 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601107 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601126 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601150 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601170 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601195 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601214 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601230 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601253 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601272 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601296 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601313 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601377 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601403 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601422 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601439 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601463 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601482 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601728 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601760 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601777 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601801 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601821 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601846 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601865 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601886 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601914 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601933 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601960 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.601986 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.602004 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.602026 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.602043 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.602069 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.602090 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.602109 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.602134 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.602150 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.602207 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.602226 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.602244 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.602270 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.602301 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.603664 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.603697 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612022 4644 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612169 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612206 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612240 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612270 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612367 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612409 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612441 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612473 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612504 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612539 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612568 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612594 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612619 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612649 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612677 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612704 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612737 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612765 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612795 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612853 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612880 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612905 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612933 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612958 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.612982 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613009 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613034 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613059 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613083 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613135 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613161 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613187 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613211 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613241 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613266 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613292 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613316 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613482 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613521 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613552 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b"
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613579 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613604 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613633 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613658 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613682 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613708 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613734 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613759 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613782 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613809 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613836 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613853 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613869 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613887 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613906 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613922 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613940 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613958 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613975 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.613992 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614020 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614038 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614056 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614076 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614096 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614115 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614132 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614149 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614166 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614184 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614201 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614218 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614235 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614253 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614270 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614287 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614304 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614321 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614365 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614384 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614399 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614419 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614435 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614451 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614468 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614486 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614501 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614518 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614534 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614552 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614569 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614584 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614601 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614618 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614636 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614653 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614669 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614688 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614705 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614723 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614767 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614784 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614801 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614819 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614838 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614855 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614874 4644 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614891 4644 reconstruct.go:97] "Volume reconstruction finished" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.614903 4644 reconciler.go:26] "Reconciler: start to sync state" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.634424 4644 manager.go:324] Recovery completed Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.646320 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.647991 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.648029 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.648038 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.649144 4644 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.649158 4644 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.649179 4644 state_mem.go:36] "Initialized new in-memory state store" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.656545 4644 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.658478 4644 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.658516 4644 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.658552 4644 kubelet.go:2335] "Starting kubelet main sync loop" Feb 04 08:41:30 crc kubenswrapper[4644]: E0204 08:41:30.658602 4644 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 04 08:41:30 crc kubenswrapper[4644]: W0204 08:41:30.661571 4644 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:41:30 crc kubenswrapper[4644]: E0204 08:41:30.661653 4644 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.664781 4644 policy_none.go:49] "None policy: Start" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.665731 4644 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.665773 4644 state_mem.go:35] "Initializing new in-memory state store" Feb 04 08:41:30 crc kubenswrapper[4644]: E0204 08:41:30.676105 4644 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.709664 4644 manager.go:334] "Starting Device Plugin manager" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.709828 4644 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.709857 4644 server.go:79] "Starting device plugin registration server" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.710690 4644 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.710719 4644 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.711592 4644 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.711733 4644 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.711755 4644 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 04 08:41:30 crc kubenswrapper[4644]: E0204 08:41:30.723682 4644 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.758763 4644 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 04 08:41:30 crc kubenswrapper[4644]: 
I0204 08:41:30.758904 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.760265 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.760321 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.760356 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.760561 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.760816 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.760870 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.763650 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.764010 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.764080 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.764104 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.764732 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.764770 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.764801 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.764910 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.765015 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.767542 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.767578 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.767592 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.767634 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.767665 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.767680 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.767846 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.768117 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.768155 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.768817 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.768862 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.768872 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.769058 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.769190 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.769240 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.770086 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.770128 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.770147 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.770415 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.770440 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.770466 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.770476 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.770504 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.770477 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.770542 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.770553 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.771380 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.771419 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.771430 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:30 crc kubenswrapper[4644]: E0204 08:41:30.778664 4644 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="400ms" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.814262 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.817217 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.817280 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.817305 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.817380 4644 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 04 08:41:30 crc kubenswrapper[4644]: E0204 08:41:30.818022 4644 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.136:6443: connect: connection refused" node="crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.818105 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.818177 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.818231 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.818279 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.818365 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.818441 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.818485 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.818530 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.818623 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.818696 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.818747 4644 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.818799 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.818842 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.818908 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.818983 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.919735 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.919820 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.919871 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.919907 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.919941 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.919977 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920007 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920045 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920073 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920102 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920126 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920150 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920170 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920162 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920220 4644 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920163 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920257 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920110 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920298 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920190 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920378 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920453 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920472 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920470 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920499 4644 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920534 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920583 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920593 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920688 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:30 crc kubenswrapper[4644]: I0204 08:41:30.920734 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.019094 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.021123 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.021187 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.021207 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.021244 4644 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 04 08:41:31 crc kubenswrapper[4644]: E0204 08:41:31.021969 4644 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.136:6443: connect: connection refused" node="crc" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.109743 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.117802 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.146478 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.153702 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.158359 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 08:41:31 crc kubenswrapper[4644]: W0204 08:41:31.170460 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6f7ecc4b1c083566b5c9a85c1f0ea7b280db40f51e054a03a4b72bfaa58df153 WatchSource:0}: Error finding container 6f7ecc4b1c083566b5c9a85c1f0ea7b280db40f51e054a03a4b72bfaa58df153: Status 404 returned error can't find the container with id 6f7ecc4b1c083566b5c9a85c1f0ea7b280db40f51e054a03a4b72bfaa58df153 Feb 04 08:41:31 crc kubenswrapper[4644]: E0204 08:41:31.179388 4644 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="800ms" Feb 04 08:41:31 crc kubenswrapper[4644]: W0204 08:41:31.183748 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-25500d36b93113b68fabc9f93409c066ebbd04bbb5b929ece97e2a987b8d78df WatchSource:0}: Error finding container 25500d36b93113b68fabc9f93409c066ebbd04bbb5b929ece97e2a987b8d78df: Status 404 returned error can't find the container with id 25500d36b93113b68fabc9f93409c066ebbd04bbb5b929ece97e2a987b8d78df Feb 04 08:41:31 crc kubenswrapper[4644]: W0204 08:41:31.186457 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-cc0c0941b7617f5adb28b2103f4c616a85f0ee53849c6d7ff8436a899766c39b WatchSource:0}: Error finding container cc0c0941b7617f5adb28b2103f4c616a85f0ee53849c6d7ff8436a899766c39b: Status 404 returned error can't find the container with id cc0c0941b7617f5adb28b2103f4c616a85f0ee53849c6d7ff8436a899766c39b Feb 04 08:41:31 crc kubenswrapper[4644]: W0204 08:41:31.194548 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-07387fb189143cbd603dae23f741e6be89b6fbed4e44383ae9709c50ec6b0f1d WatchSource:0}: Error finding container 07387fb189143cbd603dae23f741e6be89b6fbed4e44383ae9709c50ec6b0f1d: Status 404 returned error can't find the container with id 07387fb189143cbd603dae23f741e6be89b6fbed4e44383ae9709c50ec6b0f1d Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.423094 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.426162 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.426214 4644 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.426229 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.426259 4644 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 04 08:41:31 crc kubenswrapper[4644]: E0204 08:41:31.427210 4644 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.136:6443: connect: connection refused" node="crc" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.573148 4644 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.574082 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 20:14:31.321392037 +0000 UTC Feb 04 08:41:31 crc kubenswrapper[4644]: W0204 08:41:31.640667 4644 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:41:31 crc kubenswrapper[4644]: E0204 08:41:31.640770 4644 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.664396 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"25500d36b93113b68fabc9f93409c066ebbd04bbb5b929ece97e2a987b8d78df"} Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.665538 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"42a9f9845751e17b15db313dc3f46a26f0f751f6ed7fe095b35e936d73b48050"} Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.666424 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6f7ecc4b1c083566b5c9a85c1f0ea7b280db40f51e054a03a4b72bfaa58df153"} Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.667140 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"07387fb189143cbd603dae23f741e6be89b6fbed4e44383ae9709c50ec6b0f1d"} Feb 04 08:41:31 crc kubenswrapper[4644]: I0204 08:41:31.667810 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cc0c0941b7617f5adb28b2103f4c616a85f0ee53849c6d7ff8436a899766c39b"} 
Feb 04 08:41:31 crc kubenswrapper[4644]: W0204 08:41:31.700378 4644 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:41:31 crc kubenswrapper[4644]: E0204 08:41:31.700457 4644 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Feb 04 08:41:31 crc kubenswrapper[4644]: W0204 08:41:31.710509 4644 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:41:31 crc kubenswrapper[4644]: E0204 08:41:31.710597 4644 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Feb 04 08:41:31 crc kubenswrapper[4644]: E0204 08:41:31.981020 4644 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="1.6s" Feb 04 08:41:32 crc kubenswrapper[4644]: W0204 08:41:32.069294 4644 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:41:32 crc kubenswrapper[4644]: E0204 08:41:32.069546 4644 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.227290 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.231117 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.231178 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.231200 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.231237 4644 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 04 08:41:32 crc kubenswrapper[4644]: E0204 08:41:32.231965 4644 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.136:6443: 
connect: connection refused" node="crc" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.548531 4644 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 04 08:41:32 crc kubenswrapper[4644]: E0204 08:41:32.550422 4644 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.573203 4644 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.574224 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:17:25.227240278 +0000 UTC Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.672762 4644 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf" exitCode=0 Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.672852 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf"} Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.673023 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.674021 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.674069 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.674086 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.674940 4644 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a9273095a30ec465020d3077f9397def4dffc5ce8c852a169beca1308e46de62" exitCode=0 Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.675010 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.675049 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a9273095a30ec465020d3077f9397def4dffc5ce8c852a169beca1308e46de62"} Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.676530 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.676597 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:32 crc 
kubenswrapper[4644]: I0204 08:41:32.676621 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.678693 4644 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1" exitCode=0 Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.678778 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1"} Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.678956 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.680250 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.680304 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.680317 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.687061 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.687061 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53"} Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.687226 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9"} Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.687258 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6"} Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.687273 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d"} Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.688970 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.689004 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.689021 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.690982 4644 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311" exitCode=0 Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.691039 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311"} Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.691179 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.692563 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.692603 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.692624 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.695925 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.696897 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.696928 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:32 crc kubenswrapper[4644]: I0204 08:41:32.696938 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.573033 4644 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.575054 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:03:05.002807005 +0000 UTC Feb 04 08:41:33 crc kubenswrapper[4644]: E0204 08:41:33.581663 4644 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="3.2s" Feb 04 08:41:33 crc kubenswrapper[4644]: W0204 08:41:33.625432 4644 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:41:33 crc kubenswrapper[4644]: E0204 08:41:33.625516 4644 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Feb 04 08:41:33 crc kubenswrapper[4644]: W0204 08:41:33.673671 4644 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:41:33 crc kubenswrapper[4644]: E0204 08:41:33.673747 4644 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.695443 4644 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371" exitCode=0 Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.695512 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371"} Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.695583 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.696845 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.696872 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.696881 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.697507 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.697515 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"dac7d60c75f87eb470bf996682e426f2d3568c4cef2915d071ab07b92fa86d51"} Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.704172 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32"} Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.704214 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.704215 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d"} Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.704353 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3"} Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.704193 4644 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.704386 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.704395 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.704888 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.704906 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.704914 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.714988 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.715307 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9"} Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.715370 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89"} Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.715382 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644"} Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.715391 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b"} Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.715862 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.715886 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.715897 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.832874 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.835743 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.835769 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.835778 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 04 08:41:33 crc kubenswrapper[4644]: I0204 08:41:33.835797 4644 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 04 08:41:33 crc kubenswrapper[4644]: E0204 08:41:33.836202 4644 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.136:6443: connect: connection refused" node="crc" Feb 04 08:41:33 crc kubenswrapper[4644]: W0204 08:41:33.946970 4644 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:41:33 crc kubenswrapper[4644]: E0204 08:41:33.947053 4644 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.575945 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:02:41.116704666 +0000 UTC Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.719913 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.734032 4644 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="684d187f350a7ec40ad4985736cfe8a5703629ede0b0d976452cfb264539060b" exitCode=255 Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.734175 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.734175 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"684d187f350a7ec40ad4985736cfe8a5703629ede0b0d976452cfb264539060b"} Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.735596 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.735631 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.735643 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.736100 4644 scope.go:117] "RemoveContainer" containerID="684d187f350a7ec40ad4985736cfe8a5703629ede0b0d976452cfb264539060b" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.738817 4644 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07" exitCode=0 Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.738904 4644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.738922 4644 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07"} Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.738960 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.738976 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.738935 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.740420 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.740439 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.740448 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.740693 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.740746 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.740763 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.740818 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.740867 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:34 crc kubenswrapper[4644]: I0204 08:41:34.740886 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.008496 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.576558 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:26:53.597466755 +0000 UTC Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.743669 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.745825 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269"} Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.746016 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.746067 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.747548 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.747657 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.747691 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.749239 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd"} Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.749282 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b"} Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.749303 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5"} Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.749316 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269"} Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.749354 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.749353 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072"} Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.750103 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.750149 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:35 crc kubenswrapper[4644]: I0204 08:41:35.750166 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:36 crc kubenswrapper[4644]: I0204 08:41:36.577095 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:38:11.535057594 +0000 UTC Feb 04 08:41:36 crc kubenswrapper[4644]: I0204 08:41:36.752016 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:36 crc kubenswrapper[4644]: I0204 08:41:36.752071 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:36 crc kubenswrapper[4644]: I0204 08:41:36.752224 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:36 crc kubenswrapper[4644]: I0204 08:41:36.753625 4644 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:36 crc kubenswrapper[4644]: I0204 08:41:36.753693 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:36 crc kubenswrapper[4644]: I0204 08:41:36.753715 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:36 crc kubenswrapper[4644]: I0204 08:41:36.753724 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:36 crc kubenswrapper[4644]: I0204 08:41:36.753756 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:36 crc kubenswrapper[4644]: I0204 08:41:36.753768 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:36 crc kubenswrapper[4644]: I0204 08:41:36.874994 4644 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.037270 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.038973 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.039042 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.039078 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.039124 4644 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.050264 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.050522 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.051789 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.051845 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.051869 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.059685 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.577907 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 19:15:23.305649781 +0000 UTC Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.755406 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.755407 4644 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.756687 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.756739 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.756756 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.757522 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.757566 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:37 crc kubenswrapper[4644]: I0204 08:41:37.757585 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:38 crc kubenswrapper[4644]: I0204 08:41:38.578293 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:41:49.621307164 +0000 UTC Feb 04 08:41:39 crc kubenswrapper[4644]: I0204 08:41:39.032250 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:39 crc kubenswrapper[4644]: I0204 08:41:39.032576 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:39 crc kubenswrapper[4644]: I0204 08:41:39.034280 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:39 crc kubenswrapper[4644]: I0204 08:41:39.034386 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:39 crc kubenswrapper[4644]: I0204 08:41:39.034416 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:39 crc kubenswrapper[4644]: I0204 08:41:39.337316 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:39 crc kubenswrapper[4644]: I0204 08:41:39.337728 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:39 crc kubenswrapper[4644]: I0204 08:41:39.339477 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:39 crc kubenswrapper[4644]: I0204 08:41:39.339534 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:39 crc kubenswrapper[4644]: I0204 08:41:39.339551 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:39 crc kubenswrapper[4644]: I0204 08:41:39.578848 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 10:05:12.116771233 +0000 UTC Feb 04 08:41:40 crc kubenswrapper[4644]: I0204 08:41:40.113116 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 04 08:41:40 crc kubenswrapper[4644]: I0204 08:41:40.113458 
4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:40 crc kubenswrapper[4644]: I0204 08:41:40.115033 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:40 crc kubenswrapper[4644]: I0204 08:41:40.115107 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:40 crc kubenswrapper[4644]: I0204 08:41:40.115129 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:40 crc kubenswrapper[4644]: I0204 08:41:40.326874 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 08:41:40 crc kubenswrapper[4644]: I0204 08:41:40.327158 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:40 crc kubenswrapper[4644]: I0204 08:41:40.328770 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:40 crc kubenswrapper[4644]: I0204 08:41:40.328814 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:40 crc kubenswrapper[4644]: I0204 08:41:40.328836 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:40 crc kubenswrapper[4644]: I0204 08:41:40.579805 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:15:17.830727809 +0000 UTC Feb 04 08:41:40 crc kubenswrapper[4644]: E0204 08:41:40.724674 4644 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 04 08:41:41 crc kubenswrapper[4644]: I0204 08:41:41.580321 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:53:15.991677645 +0000 UTC Feb 04 08:41:42 crc kubenswrapper[4644]: I0204 08:41:42.122694 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:42 crc kubenswrapper[4644]: I0204 08:41:42.123183 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:42 crc kubenswrapper[4644]: I0204 08:41:42.124912 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:42 crc kubenswrapper[4644]: I0204 08:41:42.124959 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:42 crc kubenswrapper[4644]: I0204 08:41:42.124972 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:42 crc kubenswrapper[4644]: I0204 08:41:42.128913 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:42 crc kubenswrapper[4644]: I0204 08:41:42.580452 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 22:09:38.574520977 +0000 UTC Feb 04 08:41:42 crc kubenswrapper[4644]: 
I0204 08:41:42.685170 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:42 crc kubenswrapper[4644]: I0204 08:41:42.771375 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:42 crc kubenswrapper[4644]: I0204 08:41:42.772657 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:42 crc kubenswrapper[4644]: I0204 08:41:42.772834 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:42 crc kubenswrapper[4644]: I0204 08:41:42.772982 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:43 crc kubenswrapper[4644]: I0204 08:41:43.581181 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:21:07.650987625 +0000 UTC Feb 04 08:41:43 crc kubenswrapper[4644]: I0204 08:41:43.773454 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:43 crc kubenswrapper[4644]: I0204 08:41:43.774105 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:43 crc kubenswrapper[4644]: I0204 08:41:43.774132 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:43 crc kubenswrapper[4644]: I0204 08:41:43.774144 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:44 crc kubenswrapper[4644]: I0204 08:41:44.167788 4644 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 04 08:41:44 crc kubenswrapper[4644]: I0204 08:41:44.167873 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 04 08:41:44 crc kubenswrapper[4644]: I0204 08:41:44.176609 4644 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 04 08:41:44 crc kubenswrapper[4644]: I0204 08:41:44.176663 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 04 08:41:44 crc kubenswrapper[4644]: I0204 08:41:44.582497 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:43:59.957704627 
+0000 UTC Feb 04 08:41:44 crc kubenswrapper[4644]: I0204 08:41:44.702731 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 04 08:41:44 crc kubenswrapper[4644]: I0204 08:41:44.703022 4644 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 08:41:44 crc kubenswrapper[4644]: I0204 08:41:44.704558 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:44 crc kubenswrapper[4644]: I0204 08:41:44.704621 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:44 crc kubenswrapper[4644]: I0204 08:41:44.704640 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:45 crc kubenswrapper[4644]: I0204 08:41:45.015285 4644 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]log ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]etcd ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/generic-apiserver-start-informers ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/priority-and-fairness-filter ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/start-apiextensions-informers ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/start-apiextensions-controllers ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/crd-informer-synced ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/start-system-namespaces-controller ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 04 08:41:45 crc kubenswrapper[4644]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 04 08:41:45 crc kubenswrapper[4644]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/bootstrap-controller ok Feb 04 08:41:45 crc 
Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/start-kube-aggregator-informers ok
Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/apiservice-status-local-available-controller ok
Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/apiservice-status-remote-available-controller ok
Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/apiservice-registration-controller ok
Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/apiservice-wait-for-first-sync ok
Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/apiservice-discovery-controller ok
Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/kube-apiserver-autoregistration ok
Feb 04 08:41:45 crc kubenswrapper[4644]: [+]autoregister-completion ok
Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/apiservice-openapi-controller ok
Feb 04 08:41:45 crc kubenswrapper[4644]: [+]poststarthook/apiservice-openapiv3-controller ok
Feb 04 08:41:45 crc kubenswrapper[4644]: livez check failed
Feb 04 08:41:45 crc kubenswrapper[4644]: I0204 08:41:45.015387 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 04 08:41:45 crc kubenswrapper[4644]: I0204 08:41:45.122887 4644 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 04 08:41:45 crc kubenswrapper[4644]: I0204 08:41:45.122976 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 04 08:41:45 crc kubenswrapper[4644]: I0204 08:41:45.582829 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 23:47:18.322905539 +0000 UTC
Feb 04 08:41:46 crc kubenswrapper[4644]: I0204 08:41:46.583073 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:40:26.326423733 +0000 UTC
Feb 04 08:41:47 crc kubenswrapper[4644]: I0204 08:41:47.583273 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 23:03:56.082778751 +0000 UTC
Feb 04 08:41:48 crc kubenswrapper[4644]: I0204 08:41:48.584458 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:02:30.302347314 +0000 UTC
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.161412 4644 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
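In the verbose /livez body above, every registered check reports [+] except two poststarthooks, [-]poststarthook/rbac/bootstrap-roles and [-]poststarthook/scheduling/bootstrap-system-priority-classes, and any single [-] entry makes the aggregate check return HTTP 500 ("livez check failed"). A small sketch of pulling the failing checks out of such a body; the function name is illustrative and the sample string is abridged from the log above:

    # failing_checks.py -- extract the "[-]" lines from a verbose healthz/livez body.
    def failing_checks(body: str) -> list[str]:
        """Return the check lines that report failure (prefixed "[-]")."""
        return [line.strip()[3:] for line in body.splitlines()
                if line.strip().startswith("[-]")]

    sample = """[+]ping ok
    [+]etcd ok
    [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
    [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
    livez check failed"""

    print(failing_checks(sample))
    # ['poststarthook/rbac/bootstrap-roles failed: reason withheld',
    #  'poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld']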
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.161878 4644 trace.go:236] Trace[1482579105]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Feb-2026 08:41:38.864) (total time: 10297ms):
Feb 04 08:41:49 crc kubenswrapper[4644]: Trace[1482579105]: ---"Objects listed" error: 10297ms (08:41:49.161)
Feb 04 08:41:49 crc kubenswrapper[4644]: Trace[1482579105]: [10.297671507s] [10.297671507s] END
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.161906 4644 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.163369 4644 trace.go:236] Trace[1417415897]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Feb-2026 08:41:36.886) (total time: 12276ms):
Feb 04 08:41:49 crc kubenswrapper[4644]: Trace[1417415897]: ---"Objects listed" error: 12276ms (08:41:49.163)
Feb 04 08:41:49 crc kubenswrapper[4644]: Trace[1417415897]: [12.276364652s] [12.276364652s] END
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.163401 4644 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.165856 4644 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.169247 4644 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.172703 4644 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.174285 4644 trace.go:236] Trace[465326906]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Feb-2026 08:41:34.328) (total time: 14846ms):
Feb 04 08:41:49 crc kubenswrapper[4644]: Trace[465326906]: ---"Objects listed" error: 14845ms (08:41:49.174)
Feb 04 08:41:49 crc kubenswrapper[4644]: Trace[465326906]: [14.846009224s] [14.846009224s] END
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.174318 4644 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.178951 4644 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.179245 4644 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.180530 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.180589 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.180603 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.180621 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.180632 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:49Z","lastTransitionTime":"2026-02-04T08:41:49Z","reason":"KubeletNotReady","message":"container runtime
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.201091 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.218197 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.218236 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.218248 4644 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.218265 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.218279 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:49Z","lastTransitionTime":"2026-02-04T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.261746 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.261786 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.261799 4644
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.261818 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.261833 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:49Z","lastTransitionTime":"2026-02-04T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.270623 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.270755 4644 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.272445 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.272488 4644 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.272501 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.272517 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.272527 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:49Z","lastTransitionTime":"2026-02-04T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.374780 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.374824 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.374838 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.374858 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.374873 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:49Z","lastTransitionTime":"2026-02-04T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.477084 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.477166 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.477191 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.477228 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.477255 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:49Z","lastTransitionTime":"2026-02-04T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.562435 4644 apiserver.go:52] "Watching apiserver" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.567701 4644 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.568175 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.568715 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.568950 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.569069 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.569125 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.569283 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.569586 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.569357 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.570019 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.570056 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.572020 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.572056 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.572226 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.572316 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.572368 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.573364 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.573548 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.573963 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.574107 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.575670 4644 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576005 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576038 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576062 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576084 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576107 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576129 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576154 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576171 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576187 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576205 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576226 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576245 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576268 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576286 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576369 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576388 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576406 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576448 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576466 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576483 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576504 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576561 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576585 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576627 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576648 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576700 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576725 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576746 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576764 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576783 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576807 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576824 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576842 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576861 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 04 08:41:49 crc 
kubenswrapper[4644]: I0204 08:41:49.576885 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576902 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576920 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576940 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576960 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576977 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577018 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577039 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577057 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577084 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 04 08:41:49 
crc kubenswrapper[4644]: I0204 08:41:49.577104 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577123 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577142 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577161 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577180 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577199 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577219 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577234 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577249 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577266 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " 
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577284 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577303 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577319 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577363 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577384 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577401 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577417 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577435 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577452 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577471 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 04 08:41:49 crc kubenswrapper[4644]: 
I0204 08:41:49.577491 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577518 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577537 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577555 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577580 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577601 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577756 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577780 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577798 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577817 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577836 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577855 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577873 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577890 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577908 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577926 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577953 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577973 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577991 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578012 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578033 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578051 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578073 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578092 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578109 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578130 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578148 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578166 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578185 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578204 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578223 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578243 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578262 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578281 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578306 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578375 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578396 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578415 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578440 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578460 4644 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578480 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578498 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578520 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578540 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578561 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578582 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578602 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578622 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578641 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 04 08:41:49 crc 
kubenswrapper[4644]: I0204 08:41:49.578664 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578683 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578702 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578724 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578743 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578764 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578784 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578803 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578827 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578849 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578875 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578896 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578916 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578938 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578958 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578978 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578997 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579016 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579037 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579059 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579076 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579097 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579116 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579140 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579159 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579179 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579196 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579213 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579231 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579248 4644 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576478 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579266 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579285 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576535 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579295 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576626 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576648 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576762 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576773 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.576907 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577012 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577083 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577240 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577470 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577554 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577575 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.577619 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578189 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578480 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578509 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578760 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.578895 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579248 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579304 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579604 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579634 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579681 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579681 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579701 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579707 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579722 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579765 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579785 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579826 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579846 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579865 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579904 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579928 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579953 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.579992 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580009 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580028 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580071 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580089 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580096 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580107 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580132 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580153 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580178 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580148 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580279 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580357 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580381 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580439 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580443 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580512 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580551 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580630 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580650 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580670 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580709 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580732 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580750 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580795 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580817 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580844 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580885 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580909 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580911 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580952 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580975 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.580999 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581040 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581065 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581102 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581125 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581124 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581144 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581188 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581210 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581229 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581266 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581286 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581390 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581421 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581458 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581484 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581526 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581551 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581575 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581620 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581645 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581693 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581712 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581732 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581772 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581792 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581845 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581932 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581948 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581960 4644 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581971 4644 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582001 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582012 4644 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582026 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582036 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582046 4644 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582057 4644 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582089 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582100 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582111 4644 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582122 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582133 4644 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582163 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.585115 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.585157 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.585167 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.585184 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.585311 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:49Z","lastTransitionTime":"2026-02-04T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.611270 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.614161 4644 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581576 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581597 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581929 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.581942 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582251 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582277 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582413 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582541 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582669 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.582754 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.583005 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.583183 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.583447 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.583635 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.583764 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.583956 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.584074 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.584521 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.584585 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.584616 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.584743 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.585027 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.585350 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.585417 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.585659 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.585786 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.585818 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:20:25.717450556 +0000 UTC Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.586248 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.623804 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.625190 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.586422 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.586500 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.586570 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.586658 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.586767 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.586783 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.586954 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.587033 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.587052 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.587269 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.587558 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.588165 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.588384 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.588391 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.588442 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.588491 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.588693 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.588840 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.589504 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.589612 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.589720 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.589851 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.598256 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.598349 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.598583 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.598930 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.598966 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.598964 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.599080 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.599254 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.599365 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.599406 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.599702 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.599844 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.599599 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.599942 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:41:50.099922709 +0000 UTC m=+20.139980464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.600019 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.600221 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.600259 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.600746 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.601017 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.601265 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.601281 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.601573 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.601756 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.601784 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.601990 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.602003 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.602586 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.602635 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.602776 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.602818 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.602824 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.603237 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.603570 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.603587 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.603776 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.603828 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.603896 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.604014 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.604451 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.604449 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.604498 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.604507 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.604764 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.604905 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.605604 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.605658 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.605855 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.606197 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.606722 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.606824 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). 
InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.608031 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.608397 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.609008 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.609375 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.609578 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.610299 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.610497 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.610502 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.610863 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.611142 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.611754 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.612051 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.612070 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.612798 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.613185 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.613199 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.613352 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.613508 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.613546 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.614055 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.614476 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.614610 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.614956 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.615214 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.615231 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.615521 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.615801 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.615983 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.616012 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.613795 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.616718 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.617253 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.617317 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.617397 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.617519 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.617753 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.617841 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.617844 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.618085 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.618205 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.618813 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.618895 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.619236 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.619304 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.619391 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.619497 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.619858 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.619908 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.619754 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.620286 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.620381 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.620838 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.623512 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.623627 4644 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.626944 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 08:41:50.126924167 +0000 UTC m=+20.166981922 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.623662 4644 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.627084 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 08:41:50.127075892 +0000 UTC m=+20.167133647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.628682 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.629200 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.629618 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.630001 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.630110 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.630233 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.630452 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.630908 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.630922 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.632108 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.638437 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.644129 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.645424 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.645516 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.645565 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.645587 4644 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.645712 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-04 08:41:50.145670248 +0000 UTC m=+20.185728193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.645717 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.645743 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.645757 4644 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.645821 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-04 08:41:50.145798081 +0000 UTC m=+20.185855836 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.650404 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.652243 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.679234 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683177 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683730 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683757 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683843 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683855 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683865 4644 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683874 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683883 4644 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683893 4644 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683902 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683912 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683922 4644 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683931 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683942 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683953 4644 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683967 4644 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683980 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.683994 4644 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684007 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684018 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684029 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684040 4644 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684049 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684060 4644 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684071 4644 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684082 4644 reconciler_common.go:293] "Volume detached for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684093 4644 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684105 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684115 4644 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684125 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684135 4644 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684147 4644 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684156 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684166 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684175 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684185 4644 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684193 4644 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684202 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684212 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684220 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684228 4644 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684237 4644 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684246 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684256 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684265 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684273 4644 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684282 4644 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684290 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684299 4644 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684308 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684317 4644 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684777 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on 
node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684793 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684802 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.684893 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.685087 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.685137 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.685526 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.685550 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686025 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686043 4644 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686105 4644 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686121 4644 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686149 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686212 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686224 4644 reconciler_common.go:293] "Volume detached for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686234 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686244 4644 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686254 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686263 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686272 4644 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686282 4644 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686294 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686304 4644 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686313 4644 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686388 4644 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686397 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686407 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686454 4644 reconciler_common.go:293] "Volume detached for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686466 4644 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686475 4644 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686497 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686510 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686520 4644 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686529 4644 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686537 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686546 4644 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686555 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686563 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686572 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686581 4644 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686590 4644 reconciler_common.go:293] "Volume detached for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686598 4644 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686606 4644 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686614 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686623 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686649 4644 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686661 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686671 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686684 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686693 4644 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686703 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686713 4644 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686724 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686733 4644 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686742 4644 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686751 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686760 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686769 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686778 4644 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686789 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686798 4644 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686809 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686819 4644 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686829 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686839 4644 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686848 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686859 4644 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686869 4644 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686879 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686889 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686897 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686905 4644 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686913 4644 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686921 4644 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686929 4644 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686936 4644 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686945 4644 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686954 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686962 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686970 4644 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686977 4644 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686986 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.686994 4644 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687003 4644 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687012 4644 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687020 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687029 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687037 4644 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687045 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687056 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687065 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687075 4644 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687084 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687093 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687103 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687112 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687121 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687129 4644 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687137 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687146 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687156 4644 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687165 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687174 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687183 4644 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687191 4644 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687200 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687208 4644 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687216 4644 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687225 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687234 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687243 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687252 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687261 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687270 4644 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687279 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687287 4644 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687295 4644 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687304 4644 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687313 
4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687326 4644 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687348 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687357 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687365 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687373 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687382 4644 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687390 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687399 4644 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687407 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687415 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687424 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687432 4644 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687441 
4644 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.687450 4644 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.691362 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.707131 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.707193 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.707206 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.707223 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.707236 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:49Z","lastTransitionTime":"2026-02-04T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.716771 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.717610 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 08:41:49 crc kubenswrapper[4644]: W0204 08:41:49.734597 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-4e047e033d6cca319dae26548dc9e92219144f881932f457775082043aca36ab WatchSource:0}: Error finding container 4e047e033d6cca319dae26548dc9e92219144f881932f457775082043aca36ab: Status 404 returned error can't find the container with id 4e047e033d6cca319dae26548dc9e92219144f881932f457775082043aca36ab Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.748597 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.774964 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.787974 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.789399 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.795614 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.796717 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.799904 4644 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269" exitCode=255 Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.799967 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269"} Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.800009 4644 scope.go:117] "RemoveContainer" containerID="684d187f350a7ec40ad4985736cfe8a5703629ede0b0d976452cfb264539060b" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.802082 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4e047e033d6cca319dae26548dc9e92219144f881932f457775082043aca36ab"} Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.804737 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.810629 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.810663 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.810674 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.810692 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.810702 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:49Z","lastTransitionTime":"2026-02-04T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.810721 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7dc640afe2c0a08f0738049bc8a8d4366963c7db9021faa38f681bc459a15e94"} Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.819266 4644 scope.go:117] "RemoveContainer" containerID="0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269" Feb 04 08:41:49 crc kubenswrapper[4644]: E0204 08:41:49.819473 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.819665 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.821057 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.833780 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.843860 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.854617 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.865347 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.875799 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.887441 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.913199 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.913267 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.913282 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.913300 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.913312 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:49Z","lastTransitionTime":"2026-02-04T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:49 crc kubenswrapper[4644]: I0204 08:41:49.935861 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 08:41:49 crc kubenswrapper[4644]: W0204 08:41:49.945369 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-9e3ef825e313fa0e73c0ba17f1744181a2ce9019eea18c93a7474c2cbdc440f5 WatchSource:0}: Error finding container 9e3ef825e313fa0e73c0ba17f1744181a2ce9019eea18c93a7474c2cbdc440f5: Status 404 returned error can't find the container with id 9e3ef825e313fa0e73c0ba17f1744181a2ce9019eea18c93a7474c2cbdc440f5 Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.013930 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.016998 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.017052 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.017070 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.017093 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.017117 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:50Z","lastTransitionTime":"2026-02-04T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.028745 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.041512 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.053162 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.066503 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684d187f350a7ec40ad4985736cfe8a5703629ede0b0d976452cfb264539060b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"message\\\":\\\"W0204 08:41:33.813650 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0204 
08:41:33.813905 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770194493 cert, and key in /tmp/serving-cert-1163322029/serving-signer.crt, /tmp/serving-cert-1163322029/serving-signer.key\\\\nI0204 08:41:34.035406 1 observer_polling.go:159] Starting file observer\\\\nW0204 08:41:34.038117 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0204 08:41:34.038227 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:34.039934 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1163322029/tls.crt::/tmp/serving-cert-1163322029/tls.key\\\\\\\"\\\\nF0204 08:41:34.324434 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.088989 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.101763 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.119507 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.119569 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.119590 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.119601 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.119617 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.119628 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:50Z","lastTransitionTime":"2026-02-04T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.191916 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.192017 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.192049 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.192101 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:50 crc kubenswrapper[4644]: E0204 08:41:50.192118 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:41:51.192091518 +0000 UTC m=+21.232149273 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.192166 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:41:50 crc kubenswrapper[4644]: E0204 08:41:50.192194 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 08:41:50 crc kubenswrapper[4644]: E0204 08:41:50.192243 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 08:41:50 crc kubenswrapper[4644]: E0204 08:41:50.192248 4644 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 08:41:50 crc kubenswrapper[4644]: E0204 08:41:50.192213 4644 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 08:41:50 crc kubenswrapper[4644]: E0204 08:41:50.192256 4644 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:50 crc kubenswrapper[4644]: E0204 08:41:50.192309 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 08:41:51.192291823 +0000 UTC m=+21.232349788 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 08:41:50 crc kubenswrapper[4644]: E0204 08:41:50.192312 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 08:41:50 crc kubenswrapper[4644]: E0204 08:41:50.192340 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 08:41:50 crc kubenswrapper[4644]: E0204 08:41:50.192352 4644 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:50 crc kubenswrapper[4644]: E0204 08:41:50.192355 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-04 08:41:51.192318754 +0000 UTC m=+21.232376509 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:50 crc kubenswrapper[4644]: E0204 08:41:50.192370 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 08:41:51.192365165 +0000 UTC m=+21.232422920 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 08:41:50 crc kubenswrapper[4644]: E0204 08:41:50.192396 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-04 08:41:51.192388726 +0000 UTC m=+21.232446481 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.222725 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.222762 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.222776 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.222800 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.222813 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:50Z","lastTransitionTime":"2026-02-04T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.326176 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.326215 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.326227 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.326246 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.326258 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:50Z","lastTransitionTime":"2026-02-04T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.355611 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.429628 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.429683 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.429702 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.429727 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.429747 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:50Z","lastTransitionTime":"2026-02-04T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.532924 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.533010 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.533024 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.533041 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.533053 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:50Z","lastTransitionTime":"2026-02-04T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.624913 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:14:32.365115136 +0000 UTC Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.636993 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.637028 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.637035 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.637052 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.637062 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:50Z","lastTransitionTime":"2026-02-04T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.663712 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.664487 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.665180 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.665861 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.666467 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.666944 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.667570 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.668091 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.668709 4644 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.669205 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.669748 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.670409 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.670887 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.673779 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.673920 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.674279 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.675148 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.675771 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.676148 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.677097 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.677729 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.678159 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.679116 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.679578 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.680646 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.681040 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.682088 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.682759 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.683211 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.684463 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.685056 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.686158 4644 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.686296 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.686421 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.688646 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.689403 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.689924 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.691294 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.692155 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.692767 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.694985 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.695752 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.698456 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.699069 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.700052 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.700743 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.702711 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.703317 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.704369 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.706860 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.708011 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.709163 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.709697 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.710164 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.711122 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.711741 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.716795 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.738188 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684d187f350a7ec40ad4985736cfe8a5703629ede0b0d976452cfb264539060b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"message\\\":\\\"W0204 08:41:33.813650 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0204 08:41:33.813905 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770194493 cert, and key in /tmp/serving-cert-1163322029/serving-signer.crt, /tmp/serving-cert-1163322029/serving-signer.key\\\\nI0204 08:41:34.035406 1 observer_polling.go:159] Starting file observer\\\\nW0204 08:41:34.038117 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0204 08:41:34.038227 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:34.039934 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1163322029/tls.crt::/tmp/serving-cert-1163322029/tls.key\\\\\\\"\\\\nF0204 08:41:34.324434 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 
maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-04T08:41:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.744315 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.744393 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.744414 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.744451 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.744470 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:50Z","lastTransitionTime":"2026-02-04T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.764690 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.779032 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.791162 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.814641 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.817140 4644 scope.go:117] "RemoveContainer" containerID="0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269" Feb 04 08:41:50 crc kubenswrapper[4644]: E0204 08:41:50.817758 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.817758 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b"} Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.817869 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74"} Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.819047 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16"} Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.819148 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9e3ef825e313fa0e73c0ba17f1744181a2ce9019eea18c93a7474c2cbdc440f5"} Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.821783 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.838574 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\
\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.847347 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.847391 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.847404 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.847445 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.847457 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:50Z","lastTransitionTime":"2026-02-04T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.862462 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.879058 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.894825 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.916057 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.931587 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.947223 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.950040 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.950076 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.950086 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.950105 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.950115 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:50Z","lastTransitionTime":"2026-02-04T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:50 crc kubenswrapper[4644]: I0204 08:41:50.985880 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.003644 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:51Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.016910 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:51Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.035415 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:51Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.052663 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.052702 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.052711 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.052725 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.052735 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:51Z","lastTransitionTime":"2026-02-04T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.061931 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:51Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.077526 4644 csr.go:261] certificate signing request csr-z9qwj is approved, waiting to be issued Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.087165 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:51Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.098871 4644 csr.go:257] certificate signing request csr-z9qwj is issued Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.122408 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:51Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.155256 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.155301 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.155337 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.155353 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.155365 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:51Z","lastTransitionTime":"2026-02-04T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.201846 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.201953 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.201983 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.202056 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:41:53.202024716 +0000 UTC m=+23.242082471 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.202065 4644 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.202094 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.202124 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.202147 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-04 08:41:53.202136899 +0000 UTC m=+23.242194654 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.202244 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.202238 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.202303 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.202261 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.202319 4644 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.202354 4644 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.202388 4644 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.202430 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-04 08:41:53.202409678 +0000 UTC m=+23.242467433 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.202455 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-04 08:41:53.202445779 +0000 UTC m=+23.242503534 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.202489 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 08:41:53.202461769 +0000 UTC m=+23.242519514 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.257686 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.257746 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.257760 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.257781 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.257799 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:51Z","lastTransitionTime":"2026-02-04T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.360070 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.360109 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.360119 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.360134 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.360145 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:51Z","lastTransitionTime":"2026-02-04T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.463101 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.463137 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.463147 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.463165 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.463176 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:51Z","lastTransitionTime":"2026-02-04T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.566076 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.566116 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.566126 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.566142 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.566154 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:51Z","lastTransitionTime":"2026-02-04T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.625927 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 19:50:25.273002559 +0000 UTC Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.659505 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.659673 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.660142 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.660201 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.660257 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.660302 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.668457 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.668521 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.668531 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.668553 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.668563 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:51Z","lastTransitionTime":"2026-02-04T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.770798 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.770840 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.770849 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.770863 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.770872 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:51Z","lastTransitionTime":"2026-02-04T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.821363 4644 scope.go:117] "RemoveContainer" containerID="0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269" Feb 04 08:41:51 crc kubenswrapper[4644]: E0204 08:41:51.821497 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.873597 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.873630 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.873640 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.873660 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.873670 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:51Z","lastTransitionTime":"2026-02-04T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.975973 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.976220 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.976237 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.976251 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:51 crc kubenswrapper[4644]: I0204 08:41:51.976261 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:51Z","lastTransitionTime":"2026-02-04T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.078031 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.078063 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.078073 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.078087 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.078097 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:52Z","lastTransitionTime":"2026-02-04T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.100504 4644 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-04 08:36:51 +0000 UTC, rotation deadline is 2026-11-23 02:01:20.044270117 +0000 UTC Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.100573 4644 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7001h19m27.943700731s for next certificate rotation Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.109320 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-mszlj"] Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.109658 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.110200 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qwrck"] Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.110484 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-hlsjv"] Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.110663 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.110738 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hlsjv" Feb 04 08:41:52 crc kubenswrapper[4644]: W0204 08:41:52.118990 4644 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.119013 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 04 08:41:52 crc kubenswrapper[4644]: E0204 08:41:52.119052 4644 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 04 08:41:52 crc kubenswrapper[4644]: W0204 08:41:52.119104 4644 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 04 08:41:52 crc kubenswrapper[4644]: E0204 08:41:52.119115 4644 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 04 08:41:52 crc kubenswrapper[4644]: W0204 08:41:52.119165 4644 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 04 08:41:52 crc kubenswrapper[4644]: E0204 08:41:52.119180 4644 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this 
object" logger="UnhandledError" Feb 04 08:41:52 crc kubenswrapper[4644]: W0204 08:41:52.119217 4644 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 04 08:41:52 crc kubenswrapper[4644]: E0204 08:41:52.119227 4644 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 04 08:41:52 crc kubenswrapper[4644]: W0204 08:41:52.119266 4644 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 04 08:41:52 crc kubenswrapper[4644]: E0204 08:41:52.119277 4644 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 04 08:41:52 crc kubenswrapper[4644]: W0204 08:41:52.119340 4644 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.119380 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 04 08:41:52 crc kubenswrapper[4644]: W0204 08:41:52.119452 4644 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 04 08:41:52 crc kubenswrapper[4644]: E0204 08:41:52.119466 4644 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 04 08:41:52 crc kubenswrapper[4644]: E0204 08:41:52.119372 4644 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to 
watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.119676 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.119893 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.120009 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.122917 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.142708 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.146181 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.152545 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.159957 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.166400 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.180911 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.181059 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.181128 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.181208 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.181275 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:52Z","lastTransitionTime":"2026-02-04T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.189309 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.208501 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.210674 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4ae67081-37de-4da9-8ebb-152cd341fcfe-hosts-file\") pod \"node-resolver-hlsjv\" (UID: \"4ae67081-37de-4da9-8ebb-152cd341fcfe\") " pod="openshift-dns/node-resolver-hlsjv" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.210716 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-system-cni-dir\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.210740 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-cni-binary-copy\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.210769 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-multus-socket-dir-parent\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.210798 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2hn5\" (UniqueName: \"kubernetes.io/projected/4ae67081-37de-4da9-8ebb-152cd341fcfe-kube-api-access-b2hn5\") pod \"node-resolver-hlsjv\" (UID: \"4ae67081-37de-4da9-8ebb-152cd341fcfe\") " pod="openshift-dns/node-resolver-hlsjv" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.210824 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c2a87f38-c8a0-4007-b926-1dafb84e7483-rootfs\") pod \"machine-config-daemon-qwrck\" (UID: \"c2a87f38-c8a0-4007-b926-1dafb84e7483\") " pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.210855 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2a87f38-c8a0-4007-b926-1dafb84e7483-mcd-auth-proxy-config\") pod \"machine-config-daemon-qwrck\" (UID: \"c2a87f38-c8a0-4007-b926-1dafb84e7483\") " pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.210885 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-etc-kubernetes\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.211782 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-var-lib-cni-bin\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.211897 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-multus-daemon-config\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.212019 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-run-multus-certs\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.212150 4644 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-cnibin\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.212228 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-os-release\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.212301 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-run-k8s-cni-cncf-io\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.212411 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbptc\" (UniqueName: \"kubernetes.io/projected/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-kube-api-access-kbptc\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.212486 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-var-lib-cni-multus\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.212574 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-multus-conf-dir\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.212640 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6gjk\" (UniqueName: \"kubernetes.io/projected/c2a87f38-c8a0-4007-b926-1dafb84e7483-kube-api-access-s6gjk\") pod \"machine-config-daemon-qwrck\" (UID: \"c2a87f38-c8a0-4007-b926-1dafb84e7483\") " pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.212710 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-var-lib-kubelet\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.212803 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-run-netns\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.212875 4644 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-hostroot\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.212953 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2a87f38-c8a0-4007-b926-1dafb84e7483-proxy-tls\") pod \"machine-config-daemon-qwrck\" (UID: \"c2a87f38-c8a0-4007-b926-1dafb84e7483\") " pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.213062 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-multus-cni-dir\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.221614 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.236497 4644 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.247952 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.262025 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.281419 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.284600 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.284643 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.284677 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.284695 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.284704 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:52Z","lastTransitionTime":"2026-02-04T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.298951 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.312159 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314455 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2hn5\" (UniqueName: \"kubernetes.io/projected/4ae67081-37de-4da9-8ebb-152cd341fcfe-kube-api-access-b2hn5\") pod \"node-resolver-hlsjv\" (UID: \"4ae67081-37de-4da9-8ebb-152cd341fcfe\") " pod="openshift-dns/node-resolver-hlsjv" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314494 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" 
(UniqueName: \"kubernetes.io/host-path/c2a87f38-c8a0-4007-b926-1dafb84e7483-rootfs\") pod \"machine-config-daemon-qwrck\" (UID: \"c2a87f38-c8a0-4007-b926-1dafb84e7483\") " pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314515 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2a87f38-c8a0-4007-b926-1dafb84e7483-mcd-auth-proxy-config\") pod \"machine-config-daemon-qwrck\" (UID: \"c2a87f38-c8a0-4007-b926-1dafb84e7483\") " pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314540 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-etc-kubernetes\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314554 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-os-release\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314570 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-run-k8s-cni-cncf-io\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314588 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-var-lib-cni-bin\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314609 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-multus-daemon-config\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314630 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-run-multus-certs\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314681 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-cnibin\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314700 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbptc\" (UniqueName: \"kubernetes.io/projected/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-kube-api-access-kbptc\") pod \"multus-mszlj\" (UID: 
\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314714 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6gjk\" (UniqueName: \"kubernetes.io/projected/c2a87f38-c8a0-4007-b926-1dafb84e7483-kube-api-access-s6gjk\") pod \"machine-config-daemon-qwrck\" (UID: \"c2a87f38-c8a0-4007-b926-1dafb84e7483\") " pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314728 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-var-lib-cni-multus\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314741 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-multus-conf-dir\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314751 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-run-k8s-cni-cncf-io\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314787 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-var-lib-kubelet\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314814 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-multus-cni-dir\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314835 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-run-netns\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314835 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-etc-kubernetes\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314846 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-cnibin\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314859 4644 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-hostroot\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314879 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2a87f38-c8a0-4007-b926-1dafb84e7483-proxy-tls\") pod \"machine-config-daemon-qwrck\" (UID: \"c2a87f38-c8a0-4007-b926-1dafb84e7483\") " pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314865 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c2a87f38-c8a0-4007-b926-1dafb84e7483-rootfs\") pod \"machine-config-daemon-qwrck\" (UID: \"c2a87f38-c8a0-4007-b926-1dafb84e7483\") " pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314952 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4ae67081-37de-4da9-8ebb-152cd341fcfe-hosts-file\") pod \"node-resolver-hlsjv\" (UID: \"4ae67081-37de-4da9-8ebb-152cd341fcfe\") " pod="openshift-dns/node-resolver-hlsjv" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314997 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-run-multus-certs\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314895 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-os-release\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314903 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4ae67081-37de-4da9-8ebb-152cd341fcfe-hosts-file\") pod \"node-resolver-hlsjv\" (UID: \"4ae67081-37de-4da9-8ebb-152cd341fcfe\") " pod="openshift-dns/node-resolver-hlsjv" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.315057 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-system-cni-dir\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.315068 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-var-lib-kubelet\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.315078 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-cni-binary-copy\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" 
Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.315030 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-multus-conf-dir\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314862 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-var-lib-cni-multus\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.315105 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-multus-socket-dir-parent\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.315148 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-multus-socket-dir-parent\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.314889 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-var-lib-cni-bin\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.315002 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-host-run-netns\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.315193 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-hostroot\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.315201 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-system-cni-dir\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.315200 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-multus-cni-dir\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.315426 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-multus-daemon-config\") pod \"multus-mszlj\" 
(UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.315711 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-cni-binary-copy\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.328977 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.335873 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbptc\" (UniqueName: \"kubernetes.io/projected/7aa20f1c-0ad7-449e-a179-e246a52dfb2a-kube-api-access-kbptc\") pod \"multus-mszlj\" (UID: \"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\") " pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.343666 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.361573 4644 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.377995 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.387042 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.387087 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.387099 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.387119 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.387131 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:52Z","lastTransitionTime":"2026-02-04T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.391129 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.401701 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.413794 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.420519 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-mszlj" Feb 04 08:41:52 crc kubenswrapper[4644]: W0204 08:41:52.433781 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aa20f1c_0ad7_449e_a179_e246a52dfb2a.slice/crio-3fb28c9d17ac13317f754665acc49a84b11b2c9223c2787684c437cf05d749ac WatchSource:0}: Error finding container 3fb28c9d17ac13317f754665acc49a84b11b2c9223c2787684c437cf05d749ac: Status 404 returned error can't find the container with id 3fb28c9d17ac13317f754665acc49a84b11b2c9223c2787684c437cf05d749ac Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.434962 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.489097 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.489342 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.489421 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.489484 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.489543 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:52Z","lastTransitionTime":"2026-02-04T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.508811 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-n6jk7"] Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.509680 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.509840 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ksbcg"] Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.511082 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.511902 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.512832 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.514504 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.514644 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.514786 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.514871 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.514901 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.514885 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.515119 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.544084 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.565048 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.586483 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.592381 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.592419 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.592428 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.592444 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.592455 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:52Z","lastTransitionTime":"2026-02-04T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.605629 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618113 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovnkube-script-lib\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618155 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovn-node-metrics-cert\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618171 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-systemd-units\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618187 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-etc-openvswitch\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618273 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-ovn\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618300 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618443 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovnkube-config\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618472 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-system-cni-dir\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618488 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-systemd\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 
08:41:52.618504 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b6cg\" (UniqueName: \"kubernetes.io/projected/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-kube-api-access-2b6cg\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618549 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618583 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-kubelet\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618605 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-run-netns\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618625 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-openvswitch\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618661 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-slash\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618680 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-run-ovn-kubernetes\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618698 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-var-lib-openvswitch\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618719 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjzfm\" (UniqueName: \"kubernetes.io/projected/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-kube-api-access-kjzfm\") pod 
\"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618762 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-cni-binary-copy\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618784 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-cni-bin\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618806 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-cnibin\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618826 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-os-release\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618848 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618865 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-node-log\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618879 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-log-socket\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618897 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-cni-netd\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.618913 4644 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-env-overrides\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.626548 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 17:44:40.979988434 +0000 UTC Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.628428 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.649473 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.663595 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.677038 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.694779 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.694842 4644 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.694855 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.694877 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.694889 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:52Z","lastTransitionTime":"2026-02-04T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.697422 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc 
kubenswrapper[4644]: I0204 08:41:52.714946 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\
":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready
\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719453 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-cni-binary-copy\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719492 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-cni-bin\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719510 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-cnibin\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719524 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-os-release\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719538 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719552 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-cni-netd\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719568 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-env-overrides\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719585 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-node-log\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719606 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-log-socket\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719620 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovnkube-script-lib\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719637 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovn-node-metrics-cert\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719654 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-systemd-units\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719668 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-etc-openvswitch\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719682 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-ovn\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719700 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719696 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-cni-netd\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719717 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovnkube-config\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719737 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-cni-bin\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719784 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-system-cni-dir\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719696 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-cnibin\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719764 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-system-cni-dir\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719824 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-node-log\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719834 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-os-release\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719963 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-ovn\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719984 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-etc-openvswitch\") pod 
\"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.719999 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-systemd-units\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720019 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-systemd\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720038 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b6cg\" (UniqueName: \"kubernetes.io/projected/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-kube-api-access-2b6cg\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720068 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-systemd\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720299 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-cni-binary-copy\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720339 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720372 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-log-socket\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720371 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720407 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-run-netns\") pod \"ovnkube-node-ksbcg\" (UID: 
\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720428 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720432 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-openvswitch\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720449 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-run-netns\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720456 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-openvswitch\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720489 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-kubelet\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720508 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-kubelet\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720547 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-slash\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720562 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-run-ovn-kubernetes\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720583 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-var-lib-openvswitch\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720605 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovnkube-config\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720625 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-env-overrides\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720609 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjzfm\" (UniqueName: \"kubernetes.io/projected/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-kube-api-access-kjzfm\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720686 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-slash\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720660 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-run-ovn-kubernetes\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720740 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720758 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-var-lib-openvswitch\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.720825 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovnkube-script-lib\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.726924 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovn-node-metrics-cert\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.737428 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.742150 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b6cg\" (UniqueName: \"kubernetes.io/projected/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-kube-api-access-2b6cg\") pod \"ovnkube-node-ksbcg\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.746750 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjzfm\" (UniqueName: \"kubernetes.io/projected/ee94f0f5-c35a-425f-8fbe-1b39b699bb0a-kube-api-access-kjzfm\") pod \"multus-additional-cni-plugins-n6jk7\" (UID: \"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\") " pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.770262 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.791240 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.797369 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.797409 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.797423 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.797440 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.797454 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:52Z","lastTransitionTime":"2026-02-04T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.810520 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.822501 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.828571 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb"} Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.829839 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mszlj" event={"ID":"7aa20f1c-0ad7-449e-a179-e246a52dfb2a","Type":"ContainerStarted","Data":"3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5"} Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.829927 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mszlj" event={"ID":"7aa20f1c-0ad7-449e-a179-e246a52dfb2a","Type":"ContainerStarted","Data":"3fb28c9d17ac13317f754665acc49a84b11b2c9223c2787684c437cf05d749ac"} Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.830111 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.830184 4644 scope.go:117] "RemoveContainer" containerID="0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269" Feb 04 08:41:52 crc kubenswrapper[4644]: E0204 08:41:52.830311 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.865626 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.899137 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.910640 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.910677 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.910687 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.910702 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.910712 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:52Z","lastTransitionTime":"2026-02-04T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.930472 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.969995 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:52 crc kubenswrapper[4644]: I0204 08:41:52.986002 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:52Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.004494 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.012713 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.012745 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.012753 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.012770 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.012781 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:53Z","lastTransitionTime":"2026-02-04T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.017189 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.029607 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.044880 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.059090 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.074984 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.091685 4644 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.092105 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.107702 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.115291 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.115357 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.115367 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.115389 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.115402 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:53Z","lastTransitionTime":"2026-02-04T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.122974 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.126126 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.141999 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.157855 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.175434 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.187435 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.201469 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.217901 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.217956 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.217967 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.217988 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.218001 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:53Z","lastTransitionTime":"2026-02-04T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.221205 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.225246 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.225397 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.225504 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:41:57.225461645 +0000 UTC m=+27.265519540 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.225542 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.225569 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.225588 4644 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.225647 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-04 08:41:57.225629709 +0000 UTC m=+27.265687464 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.225700 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.225749 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.225781 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.225905 4644 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.225914 4644 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.225963 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 08:41:57.225949848 +0000 UTC m=+27.266007793 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.226004 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 08:41:57.225983749 +0000 UTC m=+27.266041504 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.225925 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.226032 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.226046 4644 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.226081 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-04 08:41:57.226074782 +0000 UTC m=+27.266132627 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.236808 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.241877 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.245729 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2a87f38-c8a0-4007-b926-1dafb84e7483-mcd-auth-proxy-config\") pod \"machine-config-daemon-qwrck\" (UID: \"c2a87f38-c8a0-4007-b926-1dafb84e7483\") " pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.257656 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.270953 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.283567 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.297311 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.302830 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6gjk\" (UniqueName: \"kubernetes.io/projected/c2a87f38-c8a0-4007-b926-1dafb84e7483-kube-api-access-s6gjk\") pod 
\"machine-config-daemon-qwrck\" (UID: \"c2a87f38-c8a0-4007-b926-1dafb84e7483\") " pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.316237 4644 secret.go:188] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.316347 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2a87f38-c8a0-4007-b926-1dafb84e7483-proxy-tls podName:c2a87f38-c8a0-4007-b926-1dafb84e7483 nodeName:}" failed. No retries permitted until 2026-02-04 08:41:53.816309115 +0000 UTC m=+23.856366860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c2a87f38-c8a0-4007-b926-1dafb84e7483-proxy-tls") pod "machine-config-daemon-qwrck" (UID: "c2a87f38-c8a0-4007-b926-1dafb84e7483") : failed to sync secret cache: timed out waiting for the condition Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.320618 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.320654 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.320668 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.320685 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.320697 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:53Z","lastTransitionTime":"2026-02-04T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.326560 4644 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.333270 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.423426 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.423488 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.423496 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.423509 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.423521 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:53Z","lastTransitionTime":"2026-02-04T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.477559 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.487703 4644 projected.go:194] Error preparing data for projected volume kube-api-access-b2hn5 for pod openshift-dns/node-resolver-hlsjv: failed to sync configmap cache: timed out waiting for the condition Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.487827 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4ae67081-37de-4da9-8ebb-152cd341fcfe-kube-api-access-b2hn5 podName:4ae67081-37de-4da9-8ebb-152cd341fcfe nodeName:}" failed. No retries permitted until 2026-02-04 08:41:53.987790062 +0000 UTC m=+24.027847867 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-b2hn5" (UniqueName: "kubernetes.io/projected/4ae67081-37de-4da9-8ebb-152cd341fcfe-kube-api-access-b2hn5") pod "node-resolver-hlsjv" (UID: "4ae67081-37de-4da9-8ebb-152cd341fcfe") : failed to sync configmap cache: timed out waiting for the condition Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.493087 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.525651 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.525690 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.525700 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.525716 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.525726 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:53Z","lastTransitionTime":"2026-02-04T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.626818 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:58:31.477868214 +0000 UTC Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.627985 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.628040 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.628058 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.628075 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.628089 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:53Z","lastTransitionTime":"2026-02-04T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.659637 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.659718 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.659761 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.659637 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.659862 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:41:53 crc kubenswrapper[4644]: E0204 08:41:53.659993 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.730041 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.730094 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.730106 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.730123 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.730134 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:53Z","lastTransitionTime":"2026-02-04T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.830269 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2a87f38-c8a0-4007-b926-1dafb84e7483-proxy-tls\") pod \"machine-config-daemon-qwrck\" (UID: \"c2a87f38-c8a0-4007-b926-1dafb84e7483\") " pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.833414 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.833460 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.833479 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.833501 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.833523 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:53Z","lastTransitionTime":"2026-02-04T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.836844 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2a87f38-c8a0-4007-b926-1dafb84e7483-proxy-tls\") pod \"machine-config-daemon-qwrck\" (UID: \"c2a87f38-c8a0-4007-b926-1dafb84e7483\") " pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.837713 4644 generic.go:334] "Generic (PLEG): container finished" podID="ee94f0f5-c35a-425f-8fbe-1b39b699bb0a" containerID="f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b" exitCode=0 Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.837760 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" event={"ID":"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a","Type":"ContainerDied","Data":"f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b"} Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.837804 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" event={"ID":"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a","Type":"ContainerStarted","Data":"b1f8962b500c083e3fc94f6de7fc305aa01b5a876a68f0811bd6f8c4d2ef91b1"} Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.842603 4644 generic.go:334] "Generic (PLEG): container finished" podID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerID="0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee" exitCode=0 Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.842792 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerDied","Data":"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee"} Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 
08:41:53.842862 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerStarted","Data":"e03bd3c341ec9b1320cf9d79dfe87ac3db27020751976c47eb039b1142b7033c"} Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.875288 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.891696 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.915552 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.931937 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.935519 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.935552 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.935560 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.935575 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.935583 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:53Z","lastTransitionTime":"2026-02-04T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.950257 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 
2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.974966 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:53 crc kubenswrapper[4644]: I0204 08:41:53.990637 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.006193 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.020631 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.034114 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2hn5\" (UniqueName: \"kubernetes.io/projected/4ae67081-37de-4da9-8ebb-152cd341fcfe-kube-api-access-b2hn5\") pod \"node-resolver-hlsjv\" (UID: \"4ae67081-37de-4da9-8ebb-152cd341fcfe\") " pod="openshift-dns/node-resolver-hlsjv" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.038259 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2hn5\" (UniqueName: \"kubernetes.io/projected/4ae67081-37de-4da9-8ebb-152cd341fcfe-kube-api-access-b2hn5\") pod \"node-resolver-hlsjv\" (UID: \"4ae67081-37de-4da9-8ebb-152cd341fcfe\") " pod="openshift-dns/node-resolver-hlsjv" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.039096 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.039133 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.039148 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.039165 4644 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.039176 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:54Z","lastTransitionTime":"2026-02-04T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.039909 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.054779 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.072801 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.087115 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.107022 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.130301 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.143116 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.143166 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.143179 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.143200 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.143211 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:54Z","lastTransitionTime":"2026-02-04T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.146172 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.167258 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.186571 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.200333 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.217621 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.239270 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.241811 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hlsjv" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.246153 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.246198 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.246208 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.246228 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.246242 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:54Z","lastTransitionTime":"2026-02-04T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.260806 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: W0204 08:41:54.262698 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ae67081_37de_4da9_8ebb_152cd341fcfe.slice/crio-fe1a377c3e4719026753620c860b8b6e81649dc7cca9db808d6ed32539f8be6f WatchSource:0}: Error finding container fe1a377c3e4719026753620c860b8b6e81649dc7cca9db808d6ed32539f8be6f: Status 404 returned error can't find the container with id fe1a377c3e4719026753620c860b8b6e81649dc7cca9db808d6ed32539f8be6f Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.282517 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.298317 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.321386 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.338657 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.350875 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.350913 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.350922 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.350942 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.350953 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:54Z","lastTransitionTime":"2026-02-04T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.355815 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.453451 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.453486 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.453494 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.453523 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.453534 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:54Z","lastTransitionTime":"2026-02-04T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.556304 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.556373 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.556388 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.556426 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.556439 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:54Z","lastTransitionTime":"2026-02-04T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.627305 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:55:04.279101487 +0000 UTC Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.660309 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.660378 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.660389 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.660410 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.660422 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:54Z","lastTransitionTime":"2026-02-04T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.729420 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.745570 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.747124 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.749362 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.762529 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.763572 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.763610 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.763622 4644 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.763644 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.763654 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:54Z","lastTransitionTime":"2026-02-04T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.777880 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.795284 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.810275 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.822857 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.834848 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.846018 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.852525 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerStarted","Data":"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.852577 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerStarted","Data":"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.852603 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerStarted","Data":"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.852615 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerStarted","Data":"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.852629 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerStarted","Data":"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.852641 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerStarted","Data":"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.854676 4644 generic.go:334] "Generic (PLEG): container finished" podID="ee94f0f5-c35a-425f-8fbe-1b39b699bb0a" containerID="447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a" exitCode=0 Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.854733 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" event={"ID":"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a","Type":"ContainerDied","Data":"447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.857465 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hlsjv" event={"ID":"4ae67081-37de-4da9-8ebb-152cd341fcfe","Type":"ContainerStarted","Data":"a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.857507 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hlsjv" event={"ID":"4ae67081-37de-4da9-8ebb-152cd341fcfe","Type":"ContainerStarted","Data":"fe1a377c3e4719026753620c860b8b6e81649dc7cca9db808d6ed32539f8be6f"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.862832 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.862876 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.862889 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"030a16937e1f70aaa5ef9eb2dd21ab67263268caf0f003085090a98dc7fbc53a"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.866734 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.866772 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.866784 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.866798 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.866810 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:54Z","lastTransitionTime":"2026-02-04T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.869919 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc 
kubenswrapper[4644]: I0204 08:41:54.886515 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.903617 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.919735 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.932918 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.966757 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.969550 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.969582 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.969589 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.969602 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.969613 4644 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:54Z","lastTransitionTime":"2026-02-04T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.979322 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:54 crc kubenswrapper[4644]: I0204 08:41:54.992526 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:54Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.007234 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.019649 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.034288 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.052971 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z 
is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.070641 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.071347 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.071371 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.071380 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.071393 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.071402 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:55Z","lastTransitionTime":"2026-02-04T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.090193 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.107864 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.131461 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.168117 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.173845 4644 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.173884 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.173897 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.173922 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.173938 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:55Z","lastTransitionTime":"2026-02-04T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.200384 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.243721 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.276907 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.276970 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.276983 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.277009 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.277023 4644 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:55Z","lastTransitionTime":"2026-02-04T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.379213 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.379267 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.379284 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.379308 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.379351 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:55Z","lastTransitionTime":"2026-02-04T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.482279 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.482347 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.482358 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.482377 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.482396 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:55Z","lastTransitionTime":"2026-02-04T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.585257 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.585312 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.585339 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.585361 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.585373 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:55Z","lastTransitionTime":"2026-02-04T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.627660 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:13:14.668100494 +0000 UTC Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.659173 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.659254 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:41:55 crc kubenswrapper[4644]: E0204 08:41:55.659300 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.659355 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:55 crc kubenswrapper[4644]: E0204 08:41:55.659482 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:41:55 crc kubenswrapper[4644]: E0204 08:41:55.659562 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.687884 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.687918 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.687927 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.687941 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.687950 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:55Z","lastTransitionTime":"2026-02-04T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.790165 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.790512 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.790523 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.790537 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.790550 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:55Z","lastTransitionTime":"2026-02-04T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.868198 4644 generic.go:334] "Generic (PLEG): container finished" podID="ee94f0f5-c35a-425f-8fbe-1b39b699bb0a" containerID="6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63" exitCode=0 Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.868887 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" event={"ID":"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a","Type":"ContainerDied","Data":"6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63"} Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.895947 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.900363 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.900397 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.900409 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.900427 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.900439 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:55Z","lastTransitionTime":"2026-02-04T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.920801 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.949829 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.964557 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:55 crc kubenswrapper[4644]: I0204 08:41:55.983352 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.002128 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:55Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.003240 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.003278 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.003287 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.003305 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.003315 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:56Z","lastTransitionTime":"2026-02-04T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.014964 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.024035 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.042454 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.056897 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.078453 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.090554 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.101228 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.105312 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.105353 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.105363 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.105379 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.105388 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:56Z","lastTransitionTime":"2026-02-04T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.112699 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.208476 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.208520 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.208531 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.208550 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.208562 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:56Z","lastTransitionTime":"2026-02-04T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.310561 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.310601 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.310612 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.310626 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.310635 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:56Z","lastTransitionTime":"2026-02-04T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.413116 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.413491 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.413506 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.413530 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.413543 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:56Z","lastTransitionTime":"2026-02-04T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.516071 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.516110 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.516120 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.516133 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.516142 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:56Z","lastTransitionTime":"2026-02-04T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.618141 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.618174 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.618184 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.618196 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.618206 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:56Z","lastTransitionTime":"2026-02-04T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.627988 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:57:45.109053597 +0000 UTC Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.704676 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-ckvx5"] Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.705065 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ckvx5" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.708219 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.708426 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.709126 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.709137 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.720411 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.721213 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.721266 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.721282 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.721304 4644 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.721318 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:56Z","lastTransitionTime":"2026-02-04T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.734994 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.747843 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.760312 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjbd2\" (UniqueName: \"kubernetes.io/projected/2da145d4-49d5-4b6f-b177-2d900eb63147-kube-api-access-zjbd2\") pod \"node-ca-ckvx5\" (UID: \"2da145d4-49d5-4b6f-b177-2d900eb63147\") " pod="openshift-image-registry/node-ca-ckvx5" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.760370 4644 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2da145d4-49d5-4b6f-b177-2d900eb63147-serviceca\") pod \"node-ca-ckvx5\" (UID: \"2da145d4-49d5-4b6f-b177-2d900eb63147\") " pod="openshift-image-registry/node-ca-ckvx5" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.760389 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2da145d4-49d5-4b6f-b177-2d900eb63147-host\") pod \"node-ca-ckvx5\" (UID: \"2da145d4-49d5-4b6f-b177-2d900eb63147\") " pod="openshift-image-registry/node-ca-ckvx5" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.770023 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"n
ame\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-
o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.780877 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.797786 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z 
is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.807669 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.818582 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.823664 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.823707 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.823721 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.823741 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.823754 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:56Z","lastTransitionTime":"2026-02-04T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.829927 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.841120 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.851434 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.861412 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2da145d4-49d5-4b6f-b177-2d900eb63147-host\") pod \"node-ca-ckvx5\" (UID: \"2da145d4-49d5-4b6f-b177-2d900eb63147\") 
" pod="openshift-image-registry/node-ca-ckvx5" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.861523 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjbd2\" (UniqueName: \"kubernetes.io/projected/2da145d4-49d5-4b6f-b177-2d900eb63147-kube-api-access-zjbd2\") pod \"node-ca-ckvx5\" (UID: \"2da145d4-49d5-4b6f-b177-2d900eb63147\") " pod="openshift-image-registry/node-ca-ckvx5" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.861537 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2da145d4-49d5-4b6f-b177-2d900eb63147-host\") pod \"node-ca-ckvx5\" (UID: \"2da145d4-49d5-4b6f-b177-2d900eb63147\") " pod="openshift-image-registry/node-ca-ckvx5" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.861554 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2da145d4-49d5-4b6f-b177-2d900eb63147-serviceca\") pod \"node-ca-ckvx5\" (UID: \"2da145d4-49d5-4b6f-b177-2d900eb63147\") " pod="openshift-image-registry/node-ca-ckvx5" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.862613 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2da145d4-49d5-4b6f-b177-2d900eb63147-serviceca\") pod \"node-ca-ckvx5\" (UID: \"2da145d4-49d5-4b6f-b177-2d900eb63147\") " pod="openshift-image-registry/node-ca-ckvx5" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.868784 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.874778 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerStarted","Data":"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708"} Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.876859 4644 generic.go:334] "Generic (PLEG): container finished" podID="ee94f0f5-c35a-425f-8fbe-1b39b699bb0a" containerID="490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90" exitCode=0 Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.876887 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" event={"ID":"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a","Type":"ContainerDied","Data":"490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90"} Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.878990 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjbd2\" (UniqueName: \"kubernetes.io/projected/2da145d4-49d5-4b6f-b177-2d900eb63147-kube-api-access-zjbd2\") pod \"node-ca-ckvx5\" (UID: \"2da145d4-49d5-4b6f-b177-2d900eb63147\") " pod="openshift-image-registry/node-ca-ckvx5" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.882245 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.900730 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.914968 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.935790 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.935822 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.935831 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.935845 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.935855 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:56Z","lastTransitionTime":"2026-02-04T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.938546 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.957226 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93
140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.969976 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.983804 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:56 crc kubenswrapper[4644]: I0204 08:41:56.997146 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.010503 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.017366 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-ckvx5" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.023185 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.040132 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.040172 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.040182 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.040196 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.040207 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:57Z","lastTransitionTime":"2026-02-04T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.045301 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958a
c19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.056229 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.070725 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.087066 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.100048 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.111605 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.122754 4644 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.136115 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.145071 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.145102 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.145111 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.145124 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.145133 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:57Z","lastTransitionTime":"2026-02-04T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.248641 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.249111 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.249148 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.249171 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.249182 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:57Z","lastTransitionTime":"2026-02-04T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.264940 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.265024 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.265054 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.265079 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.265107 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.265245 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.265263 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.265276 4644 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.265315 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:05.265301685 +0000 UTC m=+35.305359440 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.265386 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:42:05.265378997 +0000 UTC m=+35.305436742 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.265429 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.265438 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.265446 4644 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.265466 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-04 08:42:05.265460159 +0000 UTC m=+35.305517914 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.265496 4644 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.265526 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:05.265521141 +0000 UTC m=+35.305578896 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.265691 4644 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.265809 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:05.265783908 +0000 UTC m=+35.305841733 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.351398 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.351438 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.351455 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.351471 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.351481 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:57Z","lastTransitionTime":"2026-02-04T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.454360 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.454388 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.454398 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.454411 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.454422 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:57Z","lastTransitionTime":"2026-02-04T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.557236 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.557272 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.557283 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.557300 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.557310 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:57Z","lastTransitionTime":"2026-02-04T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.628754 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:19:46.453835183 +0000 UTC Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.658797 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.658921 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.659249 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.659309 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.659422 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:57 crc kubenswrapper[4644]: E0204 08:41:57.659483 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.660447 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.660494 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.660512 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.660530 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.660542 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:57Z","lastTransitionTime":"2026-02-04T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.762989 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.763028 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.763037 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.763053 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.763063 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:57Z","lastTransitionTime":"2026-02-04T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.865873 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.866153 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.866457 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.866660 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.866779 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:57Z","lastTransitionTime":"2026-02-04T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.885182 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ckvx5" event={"ID":"2da145d4-49d5-4b6f-b177-2d900eb63147","Type":"ContainerStarted","Data":"bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93"} Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.885224 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ckvx5" event={"ID":"2da145d4-49d5-4b6f-b177-2d900eb63147","Type":"ContainerStarted","Data":"b54330d1ccfef0fe748be71a7ad6f2cb2bd1203aff270fc993ed321702711086"} Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.890395 4644 generic.go:334] "Generic (PLEG): container finished" podID="ee94f0f5-c35a-425f-8fbe-1b39b699bb0a" containerID="6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a" exitCode=0 Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.890448 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" event={"ID":"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a","Type":"ContainerDied","Data":"6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a"} Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.909800 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.926349 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.946883 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.969854 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.970452 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.970497 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.970523 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.970538 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:57Z","lastTransitionTime":"2026-02-04T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.972214 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:57 crc kubenswrapper[4644]: I0204 08:41:57.990478 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.005996 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.019036 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.031559 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.045269 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.057962 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.074120 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.074150 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.074159 4644 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.074174 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.074186 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:58Z","lastTransitionTime":"2026-02-04T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.075542 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/
lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.085585 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.104108 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z 
is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.115117 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.127386 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.141897 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.154552 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.167458 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.176862 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.176905 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.176917 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.176934 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.176946 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:58Z","lastTransitionTime":"2026-02-04T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.181204 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.200227 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93
140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.212660 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.234501 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.247393 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.261235 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.279685 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.279721 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.279733 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.279748 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.279759 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:58Z","lastTransitionTime":"2026-02-04T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.290106 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.315249 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.332262 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.351843 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.364061 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.374679 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.382299 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.382350 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.382365 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.382380 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.382391 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:58Z","lastTransitionTime":"2026-02-04T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.484413 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.484459 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.484470 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.484487 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.484524 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:58Z","lastTransitionTime":"2026-02-04T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.587753 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.587792 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.587802 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.587816 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.587825 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:58Z","lastTransitionTime":"2026-02-04T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.630104 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:01:57.211046494 +0000 UTC Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.690507 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.690743 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.690752 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.690766 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.690776 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:58Z","lastTransitionTime":"2026-02-04T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.793228 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.793556 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.793646 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.793741 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.793826 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:58Z","lastTransitionTime":"2026-02-04T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.896788 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.896845 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.896861 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.896883 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.896900 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:58Z","lastTransitionTime":"2026-02-04T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.906969 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerStarted","Data":"b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0"} Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.907495 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.915704 4644 generic.go:334] "Generic (PLEG): container finished" podID="ee94f0f5-c35a-425f-8fbe-1b39b699bb0a" containerID="d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19" exitCode=0 Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.915750 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" event={"ID":"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a","Type":"ContainerDied","Data":"d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19"} Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.928778 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.946510 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.948095 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z"
Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.962689 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z"
Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.976551 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z"
Feb 04 08:41:58 crc kubenswrapper[4644]: I0204 08:41:58.985800 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.000052 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.000106 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.000116 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.000130 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.000139 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:59Z","lastTransitionTime":"2026-02-04T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.000894 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:58Z is after 2025-08-24T17:21:41Z"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.015389 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.034817 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.045790 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.061786 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.077275 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.091824 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.103935 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.103972 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.103980 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.103995 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.104005 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:59Z","lastTransitionTime":"2026-02-04T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.109253 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.127496 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.137354 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z"
Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.147862 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.162813 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.173185 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.193408 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.206232 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.206272 
4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.206283 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.206302 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.206315 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:59Z","lastTransitionTime":"2026-02-04T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.209129 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.224648 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447
910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.242103 4644 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.255748 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.275222 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.301410 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.309779 4644 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.309871 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.309899 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.309930 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.309972 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:59Z","lastTransitionTime":"2026-02-04T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.341714 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.385095 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.412822 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.412885 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.412897 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.412929 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.412940 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:59Z","lastTransitionTime":"2026-02-04T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.429110 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.462593 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.503710 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.516744 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.516809 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.516824 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.516849 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.516866 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:59Z","lastTransitionTime":"2026-02-04T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.619552 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.619585 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.619595 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.619610 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.619620 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:59Z","lastTransitionTime":"2026-02-04T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.631029 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:12:42.124651813 +0000 UTC Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.635223 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.635254 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.635263 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.635275 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.635291 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:59Z","lastTransitionTime":"2026-02-04T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:59 crc kubenswrapper[4644]: E0204 08:41:59.652687 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.657222 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.657253 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.657262 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.657277 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.657289 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:59Z","lastTransitionTime":"2026-02-04T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.659679 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.659781 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:41:59 crc kubenswrapper[4644]: E0204 08:41:59.659881 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.659906 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:41:59 crc kubenswrapper[4644]: E0204 08:41:59.659972 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:41:59 crc kubenswrapper[4644]: E0204 08:41:59.660118 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:41:59 crc kubenswrapper[4644]: E0204 08:41:59.673076 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.678477 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.678525 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.678542 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.678567 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.678586 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:59Z","lastTransitionTime":"2026-02-04T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:59 crc kubenswrapper[4644]: E0204 08:41:59.700115 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.708399 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.708440 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.708453 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.708468 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.708480 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:59Z","lastTransitionTime":"2026-02-04T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:59 crc kubenswrapper[4644]: E0204 08:41:59.722640 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.726687 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.726788 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.726817 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.726838 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.726855 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:59Z","lastTransitionTime":"2026-02-04T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:59 crc kubenswrapper[4644]: E0204 08:41:59.741784 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: E0204 08:41:59.741979 4644 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.743740 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.743816 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.743835 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.743859 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.743877 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:59Z","lastTransitionTime":"2026-02-04T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.846992 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.847448 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.847550 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.847717 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.847811 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:59Z","lastTransitionTime":"2026-02-04T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.925439 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" event={"ID":"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a","Type":"ContainerStarted","Data":"abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf"} Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.925907 4644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.926111 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.942306 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.952426 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.952476 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.952487 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.952508 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.952526 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:41:59Z","lastTransitionTime":"2026-02-04T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.963537 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.977542 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:41:59 crc kubenswrapper[4644]: I0204 08:41:59.994932 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93
140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.009196 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.031505 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.047220 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.056189 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.056221 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.056240 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.056255 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.056267 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:00Z","lastTransitionTime":"2026-02-04T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.066245 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.078549 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.091404 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.102208 4644 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.117069 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.132206 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.142147 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.154552 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.158292 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.158354 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.158362 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.158376 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.158386 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:00Z","lastTransitionTime":"2026-02-04T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.262033 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.262115 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.262133 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.262156 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.262182 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:00Z","lastTransitionTime":"2026-02-04T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.365984 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.366049 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.366065 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.366082 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.366092 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:00Z","lastTransitionTime":"2026-02-04T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.419591 4644 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.469588 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.469727 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.469742 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.469762 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.469773 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:00Z","lastTransitionTime":"2026-02-04T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.573403 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.573448 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.573460 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.573476 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.573488 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:00Z","lastTransitionTime":"2026-02-04T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.632007 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 19:36:29.368581601 +0000 UTC Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.671387 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.675577 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.675637 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.675652 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.675673 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 
08:42:00.675685 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:00Z","lastTransitionTime":"2026-02-04T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.684464 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.697936 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.716404 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93
140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.734780 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.738717 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.783946 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0421cc690ff8b000d853711e46521525b1418a8
74e78147f507ce3e8386f8b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.784497 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.784531 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.784546 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.784567 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.784584 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:00Z","lastTransitionTime":"2026-02-04T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.801196 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.819938 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.833179 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.852088 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.871590 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.887447 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.887524 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 
08:42:00.887544 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.887584 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.887598 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:00Z","lastTransitionTime":"2026-02-04T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.889313 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.905010 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and 
discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.921571 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.928499 4644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.936486 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.953082 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.966658 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.987191 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93
140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.990144 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.990285 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.990387 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.990472 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:00 crc kubenswrapper[4644]: I0204 08:42:00.990535 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:00Z","lastTransitionTime":"2026-02-04T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.005371 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.018392 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.028462 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.041268 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.053430 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.076316 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0421cc690ff8b000d853711e46521525b1418a8
74e78147f507ce3e8386f8b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.093153 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.093208 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.093220 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.093243 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.093256 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:01Z","lastTransitionTime":"2026-02-04T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.102838 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.144491 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.186075 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.195891 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.195940 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.195953 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.195978 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.195992 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:01Z","lastTransitionTime":"2026-02-04T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.222413 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.261990 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.298902 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.298954 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.298966 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.298989 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.299005 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:01Z","lastTransitionTime":"2026-02-04T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.304720 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.401081 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.401114 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.401124 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.401139 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.401149 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:01Z","lastTransitionTime":"2026-02-04T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.503658 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.503734 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.503749 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.503768 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.503781 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:01Z","lastTransitionTime":"2026-02-04T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.605950 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.605995 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.606010 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.606029 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.606042 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:01Z","lastTransitionTime":"2026-02-04T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.632548 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 00:19:07.222684438 +0000 UTC Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.659505 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:01 crc kubenswrapper[4644]: E0204 08:42:01.659685 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.659734 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.659833 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:01 crc kubenswrapper[4644]: E0204 08:42:01.659905 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:01 crc kubenswrapper[4644]: E0204 08:42:01.660060 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.715062 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.715150 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.715173 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.715203 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.715225 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:01Z","lastTransitionTime":"2026-02-04T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.819557 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.819624 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.819638 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.819664 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.819678 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:01Z","lastTransitionTime":"2026-02-04T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.922716 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.922760 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.922772 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.922793 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.922808 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:01Z","lastTransitionTime":"2026-02-04T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:01 crc kubenswrapper[4644]: I0204 08:42:01.932790 4644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.026821 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.026891 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.026915 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.026950 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.026975 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:02Z","lastTransitionTime":"2026-02-04T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.130551 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.130598 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.130611 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.130635 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.130649 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:02Z","lastTransitionTime":"2026-02-04T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.232917 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.232956 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.232965 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.232982 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.232993 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:02Z","lastTransitionTime":"2026-02-04T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.335585 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.335641 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.335653 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.335673 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.335682 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:02Z","lastTransitionTime":"2026-02-04T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.438034 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.438282 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.438396 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.438513 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.438668 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:02Z","lastTransitionTime":"2026-02-04T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.541493 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.541528 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.541536 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.541551 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.541559 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:02Z","lastTransitionTime":"2026-02-04T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.633568 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:03:48.909087172 +0000 UTC Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.645815 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.645843 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.645851 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.645862 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.645876 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:02Z","lastTransitionTime":"2026-02-04T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.777819 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.777856 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.777867 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.777882 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.777894 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:02Z","lastTransitionTime":"2026-02-04T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.890785 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.890829 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.890841 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.890857 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.890867 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:02Z","lastTransitionTime":"2026-02-04T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.994543 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.994593 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.994608 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.994627 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:02 crc kubenswrapper[4644]: I0204 08:42:02.994642 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:02Z","lastTransitionTime":"2026-02-04T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.097054 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.097097 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.097109 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.097127 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.097155 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:03Z","lastTransitionTime":"2026-02-04T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.199940 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.199994 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.200010 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.200032 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.200049 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:03Z","lastTransitionTime":"2026-02-04T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.301978 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.302017 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.302028 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.302044 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.302056 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:03Z","lastTransitionTime":"2026-02-04T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.404002 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.404036 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.404047 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.404062 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.404073 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:03Z","lastTransitionTime":"2026-02-04T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.507403 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.507480 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.507503 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.507533 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.507557 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:03Z","lastTransitionTime":"2026-02-04T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.610272 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.610375 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.610394 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.610420 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.610439 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:03Z","lastTransitionTime":"2026-02-04T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.634213 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 18:01:09.442500413 +0000 UTC Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.659709 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.659796 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.659717 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:03 crc kubenswrapper[4644]: E0204 08:42:03.660047 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:03 crc kubenswrapper[4644]: E0204 08:42:03.660189 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:03 crc kubenswrapper[4644]: E0204 08:42:03.660317 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.713220 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.713278 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.713295 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.713365 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.713384 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:03Z","lastTransitionTime":"2026-02-04T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.816572 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.816640 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.816658 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.816688 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.816714 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:03Z","lastTransitionTime":"2026-02-04T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.919423 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.919471 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.919484 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.919502 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.919514 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:03Z","lastTransitionTime":"2026-02-04T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.941566 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/0.log" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.946045 4644 generic.go:334] "Generic (PLEG): container finished" podID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerID="b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0" exitCode=1 Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.946105 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerDied","Data":"b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0"} Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.947590 4644 scope.go:117] "RemoveContainer" containerID="b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.963466 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-c
ert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:03 crc kubenswrapper[4644]: I0204 08:42:03.986556 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/l
og/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.009084 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.023371 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.023403 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.023411 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.023425 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.023434 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:04Z","lastTransitionTime":"2026-02-04T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.026450 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.038373 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.054982 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.072364 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.091857 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:02Z\\\",\\\"message\\\":\\\"ory.go:160\\\\nI0204 08:42:02.799468 5831 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.799524 5831 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.799743 5831 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.800082 5831 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.800189 5831 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:02.800281 5831 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:02.800724 5831 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:02.800754 5831 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0204 08:42:02.800773 5831 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:02.800781 5831 factory.go:656] Stopping watch factory\\\\nI0204 08:42:02.800804 5831 ovnkube.go:599] Stopped 
ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.101546 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.125278 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.125342 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.125354 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.125371 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.125382 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:04Z","lastTransitionTime":"2026-02-04T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.126513 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.163356 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.179629 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.203475 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.220982 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.228354 4644 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.228418 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.228434 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.228459 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.228477 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:04Z","lastTransitionTime":"2026-02-04T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.236363 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.307404 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd"] Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.307906 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.309695 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.347430 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.351041 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.351103 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.351117 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.351139 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.351153 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:04Z","lastTransitionTime":"2026-02-04T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.369637 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.384732 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.403377 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.419449 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.435745 4644 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.448626 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h7zw\" (UniqueName: \"kubernetes.io/projected/37e2958d-0c33-4fd2-a696-d789be254111-kube-api-access-7h7zw\") pod \"ovnkube-control-plane-749d76644c-x9rsd\" (UID: \"37e2958d-0c33-4fd2-a696-d789be254111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.448673 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/37e2958d-0c33-4fd2-a696-d789be254111-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x9rsd\" (UID: \"37e2958d-0c33-4fd2-a696-d789be254111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.448725 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/37e2958d-0c33-4fd2-a696-d789be254111-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x9rsd\" (UID: 
\"37e2958d-0c33-4fd2-a696-d789be254111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.448815 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/37e2958d-0c33-4fd2-a696-d789be254111-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x9rsd\" (UID: \"37e2958d-0c33-4fd2-a696-d789be254111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.449607 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.453404 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.453433 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.453445 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.453470 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:04 crc 
kubenswrapper[4644]: I0204 08:42:04.453483 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:04Z","lastTransitionTime":"2026-02-04T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.473117 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:02Z\\\",\\\"message\\\":\\\"ory.go:160\\\\nI0204 08:42:02.799468 5831 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.799524 5831 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.799743 5831 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.800082 5831 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.800189 5831 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:02.800281 5831 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:02.800724 5831 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:02.800754 5831 handler.go:190] Sending *v1.Node event handler 7 for 
removal\\\\nI0204 08:42:02.800773 5831 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:02.800781 5831 factory.go:656] Stopping watch factory\\\\nI0204 08:42:02.800804 5831 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.486028 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.500484 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.516719 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.531271 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.546584 4644 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.549414 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h7zw\" (UniqueName: \"kubernetes.io/projected/37e2958d-0c33-4fd2-a696-d789be254111-kube-api-access-7h7zw\") pod \"ovnkube-control-plane-749d76644c-x9rsd\" (UID: \"37e2958d-0c33-4fd2-a696-d789be254111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.549626 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/37e2958d-0c33-4fd2-a696-d789be254111-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x9rsd\" (UID: \"37e2958d-0c33-4fd2-a696-d789be254111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.549711 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/37e2958d-0c33-4fd2-a696-d789be254111-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x9rsd\" (UID: \"37e2958d-0c33-4fd2-a696-d789be254111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.549795 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/37e2958d-0c33-4fd2-a696-d789be254111-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x9rsd\" (UID: \"37e2958d-0c33-4fd2-a696-d789be254111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.550497 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/37e2958d-0c33-4fd2-a696-d789be254111-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x9rsd\" (UID: \"37e2958d-0c33-4fd2-a696-d789be254111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.550544 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/37e2958d-0c33-4fd2-a696-d789be254111-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x9rsd\" (UID: \"37e2958d-0c33-4fd2-a696-d789be254111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.555157 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.555393 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.555461 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.555525 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.555578 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:04Z","lastTransitionTime":"2026-02-04T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.557670 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/37e2958d-0c33-4fd2-a696-d789be254111-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x9rsd\" (UID: \"37e2958d-0c33-4fd2-a696-d789be254111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.570281 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.579105 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h7zw\" (UniqueName: \"kubernetes.io/projected/37e2958d-0c33-4fd2-a696-d789be254111-kube-api-access-7h7zw\") pod \"ovnkube-control-plane-749d76644c-x9rsd\" (UID: \"37e2958d-0c33-4fd2-a696-d789be254111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.594591 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.624846 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.634312 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline 
is 2026-01-07 17:28:08.697107864 +0000 UTC Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.641313 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.657607 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.659270 4644 scope.go:117] "RemoveContainer" containerID="0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.662465 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.662492 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.662501 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.662513 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.662522 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:04Z","lastTransitionTime":"2026-02-04T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.765263 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.765303 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.765314 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.765335 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.765360 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:04Z","lastTransitionTime":"2026-02-04T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.867294 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.867350 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.867361 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.867376 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.867388 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:04Z","lastTransitionTime":"2026-02-04T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.951187 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/0.log" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.953580 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerStarted","Data":"46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546"} Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.953688 4644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.956422 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" event={"ID":"37e2958d-0c33-4fd2-a696-d789be254111","Type":"ContainerStarted","Data":"281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30"} Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.956449 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" event={"ID":"37e2958d-0c33-4fd2-a696-d789be254111","Type":"ContainerStarted","Data":"19b05d37109ad01bbff1bbcaa07f24bbfa5a680f0f7d03bd117ac88bcb379f87"} Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.961561 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.963267 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da"} Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.963621 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.968710 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.969574 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.969602 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.969612 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.969627 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.969638 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:04Z","lastTransitionTime":"2026-02-04T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:04 crc kubenswrapper[4644]: I0204 08:42:04.983999 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:04Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.014112 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:02Z\\\",\\\"message\\\":\\\"ory.go:160\\\\nI0204 08:42:02.799468 5831 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.799524 5831 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.799743 5831 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.800082 5831 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.800189 5831 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:02.800281 5831 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:02.800724 5831 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:02.800754 5831 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0204 08:42:02.800773 5831 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:02.800781 5831 factory.go:656] Stopping watch factory\\\\nI0204 08:42:02.800804 5831 ovnkube.go:599] Stopped 
ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.026079 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.039992 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.055473 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.072393 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.072439 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.072453 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.072471 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.072483 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:05Z","lastTransitionTime":"2026-02-04T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.077946 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.096278 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.111381 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.131267 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.154188 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.168707 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.174995 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.175147 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.175269 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.175333 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.175420 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:05Z","lastTransitionTime":"2026-02-04T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.189057 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\
\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.211526 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.229654 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.248132 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.266606 4644 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.278482 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.278813 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.278907 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.278996 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.279076 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:05Z","lastTransitionTime":"2026-02-04T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.278914 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.298759 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:02Z\\\",\\\"message\\\":\\\"ory.go:160\\\\nI0204 08:42:02.799468 5831 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.799524 5831 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.799743 5831 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.800082 5831 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.800189 5831 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:02.800281 5831 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:02.800724 5831 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:02.800754 5831 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0204 08:42:02.800773 5831 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:02.800781 5831 factory.go:656] Stopping watch factory\\\\nI0204 08:42:02.800804 5831 ovnkube.go:599] Stopped 
ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.311275 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.325891 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.341209 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.358045 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.358043 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.358179 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.358208 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.358233 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.358260 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.358367 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.358388 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.358402 4644 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.358446 4644 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.358367 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.358401 4644 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.358498 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.358514 4644 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.358449 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:21.358433019 +0000 UTC m=+51.398490784 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.358543 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:21.358527261 +0000 UTC m=+51.398585016 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.358558 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:21.358550012 +0000 UTC m=+51.398607777 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.358590 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:42:21.358564272 +0000 UTC m=+51.398622027 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.358605 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:21.358598113 +0000 UTC m=+51.398655868 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.370200 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.381729 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.381761 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.381772 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.381790 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.381801 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:05Z","lastTransitionTime":"2026-02-04T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.385346 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.402177 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.418185 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.433892 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.455292 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93
140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.468392 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.482635 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.484301 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.484331 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.484340 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.484367 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.484376 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:05Z","lastTransitionTime":"2026-02-04T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.502255 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.587052 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 
08:42:05.587098 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.587108 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.587125 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.587136 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:05Z","lastTransitionTime":"2026-02-04T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.634933 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:00:32.416399037 +0000 UTC Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.659614 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.659684 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.659747 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.659823 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.660078 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.660259 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.690830 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.691207 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.691281 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.691379 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.691452 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:05Z","lastTransitionTime":"2026-02-04T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.795199 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.795256 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.795270 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.795289 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.795303 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:05Z","lastTransitionTime":"2026-02-04T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.897640 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.897690 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.897705 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.897724 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.897738 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:05Z","lastTransitionTime":"2026-02-04T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.968356 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/1.log" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.968973 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/0.log" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.971836 4644 generic.go:334] "Generic (PLEG): container finished" podID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerID="46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546" exitCode=1 Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.971901 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerDied","Data":"46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546"} Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.971960 4644 scope.go:117] "RemoveContainer" containerID="b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.972639 4644 scope.go:117] "RemoveContainer" containerID="46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546" Feb 04 08:42:05 crc kubenswrapper[4644]: E0204 08:42:05.972812 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.975989 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" event={"ID":"37e2958d-0c33-4fd2-a696-d789be254111","Type":"ContainerStarted","Data":"797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b"} Feb 04 08:42:05 crc kubenswrapper[4644]: I0204 08:42:05.987290 4644 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:05Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.000637 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.000839 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.000972 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.001080 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.001173 4644 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:06Z","lastTransitionTime":"2026-02-04T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.004875 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.023847 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.046526 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.063084 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\
\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.084763 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3
e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.098161 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.108976 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.109041 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.109050 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.109062 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.109071 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:06Z","lastTransitionTime":"2026-02-04T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.112347 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.135264 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\
"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.154175 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.169257 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.173409 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-f6ghp"] Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.174056 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:06 crc kubenswrapper[4644]: E0204 08:42:06.174138 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.183433 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overri
des\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.195985 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.208157 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.212232 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.212281 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.212296 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.212376 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.212394 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:06Z","lastTransitionTime":"2026-02-04T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.230146 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1
616cc7e8755146db415b7546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:02Z\\\",\\\"message\\\":\\\"ory.go:160\\\\nI0204 08:42:02.799468 5831 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.799524 5831 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.799743 5831 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.800082 5831 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.800189 5831 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:02.800281 5831 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:02.800724 5831 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:02.800754 5831 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0204 08:42:02.800773 5831 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:02.800781 5831 factory.go:656] Stopping watch factory\\\\nI0204 08:42:02.800804 5831 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"message\\\":\\\".0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039248 5997 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039927 5997 ovnkube.go:599] Stopped 
ovnkube\\\\nI0204 08:42:05.039999 5997 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:05.040126 5997 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.242847 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.258887 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.267572 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs\") pod \"network-metrics-daemon-f6ghp\" (UID: \"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\") " pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.267669 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkztj\" (UniqueName: \"kubernetes.io/projected/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-kube-api-access-xkztj\") pod \"network-metrics-daemon-f6ghp\" (UID: \"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\") " pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.276077 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.290579 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.315812 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.315874 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.315887 4644 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.315909 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.315924 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:06Z","lastTransitionTime":"2026-02-04T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.319022 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/
lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.339108 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.365688 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1
616cc7e8755146db415b7546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:02Z\\\",\\\"message\\\":\\\"ory.go:160\\\\nI0204 08:42:02.799468 5831 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.799524 5831 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.799743 5831 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.800082 5831 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.800189 5831 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:02.800281 5831 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:02.800724 5831 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:02.800754 5831 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0204 08:42:02.800773 5831 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:02.800781 5831 factory.go:656] Stopping watch factory\\\\nI0204 08:42:02.800804 5831 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"message\\\":\\\".0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039248 5997 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039927 5997 ovnkube.go:599] Stopped 
ovnkube\\\\nI0204 08:42:05.039999 5997 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:05.040126 5997 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.369126 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs\") pod \"network-metrics-daemon-f6ghp\" (UID: \"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\") " pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.369241 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkztj\" (UniqueName: \"kubernetes.io/projected/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-kube-api-access-xkztj\") pod \"network-metrics-daemon-f6ghp\" (UID: \"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\") " pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:06 crc kubenswrapper[4644]: E0204 08:42:06.369398 4644 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:42:06 crc kubenswrapper[4644]: E0204 08:42:06.369557 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs podName:b0c747eb-fe5e-4cad-a021-307cc2ed1ad5 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:06.86952348 +0000 UTC m=+36.909581265 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs") pod "network-metrics-daemon-f6ghp" (UID: "b0c747eb-fe5e-4cad-a021-307cc2ed1ad5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.380841 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.396901 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.399832 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkztj\" (UniqueName: \"kubernetes.io/projected/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-kube-api-access-xkztj\") pod \"network-metrics-daemon-f6ghp\" (UID: \"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\") " pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.420254 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.420314 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.420330 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.420384 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.420416 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:06Z","lastTransitionTime":"2026-02-04T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.422982 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\
\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.439465 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.452562 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.470532 4644 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.482279 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.497137 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.513023 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.524383 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.524443 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.524460 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.524488 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.524507 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:06Z","lastTransitionTime":"2026-02-04T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.528875 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.542059 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 04 
08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.628627 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.628686 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.628700 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.628727 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.628740 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:06Z","lastTransitionTime":"2026-02-04T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.635948 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 12:04:10.617562816 +0000 UTC Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.732404 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.732457 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.732467 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.732486 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.732500 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:06Z","lastTransitionTime":"2026-02-04T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.835268 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.835302 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.835314 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.835356 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.835374 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:06Z","lastTransitionTime":"2026-02-04T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.874778 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs\") pod \"network-metrics-daemon-f6ghp\" (UID: \"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\") " pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:06 crc kubenswrapper[4644]: E0204 08:42:06.875074 4644 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:42:06 crc kubenswrapper[4644]: E0204 08:42:06.875185 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs podName:b0c747eb-fe5e-4cad-a021-307cc2ed1ad5 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:07.875152098 +0000 UTC m=+37.915209903 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs") pod "network-metrics-daemon-f6ghp" (UID: "b0c747eb-fe5e-4cad-a021-307cc2ed1ad5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.938140 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.938180 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.938193 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.938213 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.938227 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:06Z","lastTransitionTime":"2026-02-04T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:06 crc kubenswrapper[4644]: I0204 08:42:06.981644 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/1.log" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.040572 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.040609 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.040619 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.040636 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.040648 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:07Z","lastTransitionTime":"2026-02-04T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.143210 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.143248 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.143259 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.143274 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.143285 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:07Z","lastTransitionTime":"2026-02-04T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.246556 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.246613 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.246636 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.246686 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.246710 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:07Z","lastTransitionTime":"2026-02-04T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.349996 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.350129 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.350159 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.350189 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.350210 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:07Z","lastTransitionTime":"2026-02-04T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.453189 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.453256 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.453280 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.453310 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.453366 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:07Z","lastTransitionTime":"2026-02-04T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.556000 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.556062 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.556075 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.556093 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.556105 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:07Z","lastTransitionTime":"2026-02-04T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.636423 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 06:12:06.877947347 +0000 UTC Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.660678 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.660767 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:07 crc kubenswrapper[4644]: E0204 08:42:07.661051 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.661516 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.661611 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:07 crc kubenswrapper[4644]: E0204 08:42:07.662082 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:07 crc kubenswrapper[4644]: E0204 08:42:07.662256 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:07 crc kubenswrapper[4644]: E0204 08:42:07.662464 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.663759 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.663839 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.663869 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.663895 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.663976 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:07Z","lastTransitionTime":"2026-02-04T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.766692 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.766723 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.766731 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.766745 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.766754 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:07Z","lastTransitionTime":"2026-02-04T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.870090 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.870158 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.870173 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.870197 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.870211 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:07Z","lastTransitionTime":"2026-02-04T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.885742 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs\") pod \"network-metrics-daemon-f6ghp\" (UID: \"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\") " pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:07 crc kubenswrapper[4644]: E0204 08:42:07.885932 4644 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:42:07 crc kubenswrapper[4644]: E0204 08:42:07.886031 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs podName:b0c747eb-fe5e-4cad-a021-307cc2ed1ad5 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:09.886009313 +0000 UTC m=+39.926067068 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs") pod "network-metrics-daemon-f6ghp" (UID: "b0c747eb-fe5e-4cad-a021-307cc2ed1ad5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.972785 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.972819 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.972827 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.972840 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:07 crc kubenswrapper[4644]: I0204 08:42:07.972848 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:07Z","lastTransitionTime":"2026-02-04T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.075313 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.075436 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.075492 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.075517 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.075535 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:08Z","lastTransitionTime":"2026-02-04T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.179159 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.179223 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.179248 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.179279 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.179306 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:08Z","lastTransitionTime":"2026-02-04T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.282454 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.282508 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.282525 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.282548 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.282567 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:08Z","lastTransitionTime":"2026-02-04T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.385539 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.385578 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.385590 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.385606 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.385617 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:08Z","lastTransitionTime":"2026-02-04T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.488641 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.488688 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.488706 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.488731 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.488748 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:08Z","lastTransitionTime":"2026-02-04T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.591235 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.591291 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.591310 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.591367 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.591386 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:08Z","lastTransitionTime":"2026-02-04T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.636592 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:00:55.798075644 +0000 UTC Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.694203 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.694247 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.694258 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.694275 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.694288 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:08Z","lastTransitionTime":"2026-02-04T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.797770 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.798127 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.798256 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.798377 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.798462 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:08Z","lastTransitionTime":"2026-02-04T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.901923 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.901996 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.902017 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.902046 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:08 crc kubenswrapper[4644]: I0204 08:42:08.902068 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:08Z","lastTransitionTime":"2026-02-04T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.004957 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.005038 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.005059 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.005087 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.005109 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:09Z","lastTransitionTime":"2026-02-04T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.108637 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.108700 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.108718 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.108742 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.108759 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:09Z","lastTransitionTime":"2026-02-04T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.211871 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.211940 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.211957 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.211985 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.212005 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:09Z","lastTransitionTime":"2026-02-04T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.315033 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.315102 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.315121 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.315149 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.315169 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:09Z","lastTransitionTime":"2026-02-04T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.418851 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.418902 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.418911 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.418928 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.418938 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:09Z","lastTransitionTime":"2026-02-04T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.522382 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.522461 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.522487 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.522520 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.522541 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:09Z","lastTransitionTime":"2026-02-04T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.626091 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.626141 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.626153 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.626171 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.626181 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:09Z","lastTransitionTime":"2026-02-04T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.638339 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:05:15.899665936 +0000 UTC Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.658992 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.659011 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.659039 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.658997 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:09 crc kubenswrapper[4644]: E0204 08:42:09.659137 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:09 crc kubenswrapper[4644]: E0204 08:42:09.659237 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:09 crc kubenswrapper[4644]: E0204 08:42:09.659316 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:09 crc kubenswrapper[4644]: E0204 08:42:09.659426 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.729522 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.729580 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.729591 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.729611 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.729624 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:09Z","lastTransitionTime":"2026-02-04T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.835828 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.835872 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.835886 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.835906 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.835922 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:09Z","lastTransitionTime":"2026-02-04T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.909042 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs\") pod \"network-metrics-daemon-f6ghp\" (UID: \"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\") " pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:09 crc kubenswrapper[4644]: E0204 08:42:09.909291 4644 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:42:09 crc kubenswrapper[4644]: E0204 08:42:09.909400 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs podName:b0c747eb-fe5e-4cad-a021-307cc2ed1ad5 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:13.909376247 +0000 UTC m=+43.949434022 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs") pod "network-metrics-daemon-f6ghp" (UID: "b0c747eb-fe5e-4cad-a021-307cc2ed1ad5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.936922 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.936963 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.936973 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.936987 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.936995 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:09Z","lastTransitionTime":"2026-02-04T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:09 crc kubenswrapper[4644]: E0204 08:42:09.948727 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:09Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.952093 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.952117 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.952126 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.952139 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.952148 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:09Z","lastTransitionTime":"2026-02-04T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:09 crc kubenswrapper[4644]: E0204 08:42:09.964627 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:09Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.967496 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.967525 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.967536 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.967551 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.967562 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:09Z","lastTransitionTime":"2026-02-04T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:09 crc kubenswrapper[4644]: E0204 08:42:09.978147 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:09Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.980854 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.980877 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
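
Every status patch above fails the same way: the kubelet's PATCH is intercepted by the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, whose serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-02-04. A minimal Go sketch of an out-of-band check (a hypothetical diagnostic, not part of kubelet; the endpoint is taken from the log) that dials the webhook and prints the validity window of the certificate it presents:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the kubelet error. Verification
	// is skipped on purpose so an already-expired certificate can still
	// be read instead of aborting the handshake.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
	if now := time.Now(); now.After(cert.NotAfter) {
		fmt.Println("expired for:", now.Sub(cert.NotAfter).Round(time.Hour))
	}
}

InsecureSkipVerify is what lets the handshake complete long enough to inspect an expired certificate; the kubelet, verifying properly, rejects it instead.
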
event="NodeHasNoDiskPressure" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.980885 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.980897 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.980905 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:09Z","lastTransitionTime":"2026-02-04T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:09 crc kubenswrapper[4644]: E0204 08:42:09.991828 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:09Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.994606 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.994635 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
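
The verifier's complaint is purely a clock comparison: an x509 certificate is valid only while NotBefore <= now <= NotAfter, and here the current time (2026-02-04T08:42:09Z) is roughly five and a half months past NotAfter (2025-08-24T17:21:41Z). A small sketch reproducing the check with the two timestamps quoted verbatim in the error:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Both timestamps are copied from the x509 error in the log above.
	now, err := time.Parse(time.RFC3339, "2026-02-04T08:42:09Z")
	if err != nil {
		panic(err)
	}
	notAfter, _ := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z")

	// x509 validity is the interval [NotBefore, NotAfter]; once the
	// clock passes NotAfter, every handshake against this certificate
	// fails with "certificate has expired or is not yet valid".
	if now.After(notAfter) {
		fmt.Printf("current time is %v past notAfter\n", now.Sub(notAfter).Round(time.Hour))
	}
}

This is why the failure is deterministic: no amount of retrying can succeed until either the certificate is rotated or the clock moves back inside the validity window.
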
event="NodeHasNoDiskPressure" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.994646 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.994679 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:09 crc kubenswrapper[4644]: I0204 08:42:09.995248 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:09Z","lastTransitionTime":"2026-02-04T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:10 crc kubenswrapper[4644]: E0204 08:42:10.007110 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: E0204 08:42:10.007227 4644 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.008524 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
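
The sequence above, several "Error updating node status, will retry" entries followed by "Unable to update node status" err="update node status exceeds retry count", is a bounded retry loop: the kubelet attempts the status patch a fixed number of times per sync before giving up until the next interval. A schematic sketch of that pattern (the budget of 5 matches upstream kubelet's nodeStatusUpdateRetry constant, but treat the value and the function shape as assumptions of this sketch, not kubelet source):

package main

import (
	"errors"
	"fmt"
)

// A fixed retry budget per status sync, assumed to be 5 here.
const nodeStatusUpdateRetry = 5

// updateNodeStatus mirrors the shape of the log: each failed PATCH logs
// "will retry", and exhausting the budget yields the final error string.
func updateNodeStatus(patch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		err := patch()
		if err == nil {
			return nil
		}
		fmt.Println("Error updating node status, will retry:", err)
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Stand-in for the PATCH that the expired-certificate webhook rejects.
	rejected := errors.New("tls: failed to verify certificate: x509: certificate has expired or is not yet valid")
	fmt.Println(updateNodeStatus(func() error { return rejected }))
}

Because the rejection is deterministic (see the timestamp comparison above), every pass through this loop fails identically, which is exactly the repetition visible in the log.
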
event="NodeHasSufficientMemory" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.008555 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.008564 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.008576 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.008585 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:10Z","lastTransitionTime":"2026-02-04T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.111542 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.111635 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.111669 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.111719 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.111741 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:10Z","lastTransitionTime":"2026-02-04T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.215761 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.215832 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.215855 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.215882 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.215905 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:10Z","lastTransitionTime":"2026-02-04T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.318923 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.318981 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.318990 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.319009 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.319038 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:10Z","lastTransitionTime":"2026-02-04T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.422244 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.422283 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.422313 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.422356 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.422368 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:10Z","lastTransitionTime":"2026-02-04T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.525258 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.525318 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.525372 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.525389 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.525403 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:10Z","lastTransitionTime":"2026-02-04T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.628109 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.628171 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.628190 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.628213 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.628231 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:10Z","lastTransitionTime":"2026-02-04T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.638698 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:18:50.309659013 +0000 UTC Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.673599 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\
\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.687646 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.699495 4644 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.713754 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.730178 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.730215 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:10 crc 
kubenswrapper[4644]: I0204 08:42:10.730226 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.730241 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.730252 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:10Z","lastTransitionTime":"2026-02-04T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.735998 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.754250 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.770511 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.787954 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.803481 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.817700 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.831866 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.831907 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.831919 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.831937 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:10 
crc kubenswrapper[4644]: I0204 08:42:10.831950 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:10Z","lastTransitionTime":"2026-02-04T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.841045 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.857105 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.878241 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93
140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.891239 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.921045 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec
8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0421cc690ff8b000d853711e46521525b1418a874e78147f507ce3e8386f8b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:02Z\\\",\\\"message\\\":\\\"ory.go:160\\\\nI0204 08:42:02.799468 5831 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.799524 5831 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.799743 5831 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.800082 5831 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:02.800189 5831 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:02.800281 5831 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:02.800724 5831 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:02.800754 5831 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0204 08:42:02.800773 5831 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:02.800781 5831 factory.go:656] Stopping watch factory\\\\nI0204 08:42:02.800804 5831 ovnkube.go:599] Stopped 
ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"message\\\":\\\".0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039248 5997 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039927 5997 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:05.039999 5997 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:05.040126 5997 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.934868 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.935179 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.935266 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.935391 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.935491 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:10Z","lastTransitionTime":"2026-02-04T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.937732 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:10 crc kubenswrapper[4644]: I0204 08:42:10.952364 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.041926 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.042001 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.042023 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.042050 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.042077 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:11Z","lastTransitionTime":"2026-02-04T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.144630 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.144677 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.144694 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.144716 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.144733 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:11Z","lastTransitionTime":"2026-02-04T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.247937 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.247988 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.248011 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.248045 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.248066 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:11Z","lastTransitionTime":"2026-02-04T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.351792 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.351859 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.351879 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.351905 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.351925 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:11Z","lastTransitionTime":"2026-02-04T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.371757 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.373085 4644 scope.go:117] "RemoveContainer" containerID="46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546" Feb 04 08:42:11 crc kubenswrapper[4644]: E0204 08:42:11.373427 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.396875 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.415534 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.447515 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"message\\\":\\\".0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039248 5997 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039927 5997 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:05.039999 5997 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:05.040126 5997 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.456452 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.456520 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.456538 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.456567 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.456585 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:11Z","lastTransitionTime":"2026-02-04T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.465968 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.487912 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.508503 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.531607 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.551888 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.560322 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.560447 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.560471 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.560500 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.560523 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:11Z","lastTransitionTime":"2026-02-04T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.565052 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.584290 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\
\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.598091 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.611124 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.623573 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 
08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.639262 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:00:11.590356016 +0000 UTC Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.646698 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.658923 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.658972 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.658925 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:11 crc kubenswrapper[4644]: E0204 08:42:11.659090 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.659106 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:11 crc kubenswrapper[4644]: E0204 08:42:11.659178 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:11 crc kubenswrapper[4644]: E0204 08:42:11.659253 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:11 crc kubenswrapper[4644]: E0204 08:42:11.659294 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.660300 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.663422 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.663462 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.663478 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.663499 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.663514 4644 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:11Z","lastTransitionTime":"2026-02-04T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.676895 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.690751 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.766803 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.766844 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.766857 4644 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.766875 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.766888 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:11Z","lastTransitionTime":"2026-02-04T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.911745 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.911791 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.911807 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.911827 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:11 crc kubenswrapper[4644]: I0204 08:42:11.911846 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:11Z","lastTransitionTime":"2026-02-04T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.014436 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.014796 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.015035 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.015228 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.015466 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:12Z","lastTransitionTime":"2026-02-04T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.118309 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.118523 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.118553 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.118583 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.118603 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:12Z","lastTransitionTime":"2026-02-04T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.221651 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.221711 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.221737 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.221765 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.221786 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:12Z","lastTransitionTime":"2026-02-04T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.324601 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.324910 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.325046 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.325173 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.325371 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:12Z","lastTransitionTime":"2026-02-04T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.430109 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.430170 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.430233 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.430263 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.430281 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:12Z","lastTransitionTime":"2026-02-04T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.533729 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.533801 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.533824 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.533853 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.533876 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:12Z","lastTransitionTime":"2026-02-04T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.637299 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.637389 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.637407 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.637429 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.637447 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:12Z","lastTransitionTime":"2026-02-04T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.639505 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:03:20.650620711 +0000 UTC Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.740202 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.740257 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.740267 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.740289 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.740301 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:12Z","lastTransitionTime":"2026-02-04T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.844503 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.844934 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.845653 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.845998 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.846132 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:12Z","lastTransitionTime":"2026-02-04T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.950875 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.950938 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.950951 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.950972 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:12 crc kubenswrapper[4644]: I0204 08:42:12.950987 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:12Z","lastTransitionTime":"2026-02-04T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.053045 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.053107 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.053118 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.053134 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.053148 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:13Z","lastTransitionTime":"2026-02-04T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.155710 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.155779 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.155796 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.155818 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.155834 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:13Z","lastTransitionTime":"2026-02-04T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.259090 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.259160 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.259183 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.259213 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.259233 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:13Z","lastTransitionTime":"2026-02-04T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.362397 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.362819 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.362982 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.363219 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.363440 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:13Z","lastTransitionTime":"2026-02-04T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.466062 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.466114 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.466131 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.466153 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.466170 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:13Z","lastTransitionTime":"2026-02-04T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.569901 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.570229 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.570510 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.570831 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.571185 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:13Z","lastTransitionTime":"2026-02-04T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.639669 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:46:22.570888467 +0000 UTC Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.659076 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.659152 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:13 crc kubenswrapper[4644]: E0204 08:42:13.659223 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.659248 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:13 crc kubenswrapper[4644]: E0204 08:42:13.659315 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.659479 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:13 crc kubenswrapper[4644]: E0204 08:42:13.659603 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:13 crc kubenswrapper[4644]: E0204 08:42:13.659468 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.675655 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.675711 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.675740 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.675755 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.675764 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:13Z","lastTransitionTime":"2026-02-04T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.778586 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.779051 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.779211 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.779400 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.779546 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:13Z","lastTransitionTime":"2026-02-04T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.883158 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.883549 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.883809 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.883968 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.884107 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:13Z","lastTransitionTime":"2026-02-04T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.950868 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs\") pod \"network-metrics-daemon-f6ghp\" (UID: \"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\") " pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:13 crc kubenswrapper[4644]: E0204 08:42:13.951124 4644 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:42:13 crc kubenswrapper[4644]: E0204 08:42:13.951245 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs podName:b0c747eb-fe5e-4cad-a021-307cc2ed1ad5 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:21.951213722 +0000 UTC m=+51.991271507 (durationBeforeRetry 8s). 
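The nestedpendingoperations entry above (its "Error:" detail continues immediately below) records the kubelet volume manager backing off after repeated failures to mount the "metrics-certs" secret, whose object is not yet registered with the kubelet while the node's API view is degraded. The 8s durationBeforeRetry is consistent with an exponential backoff that starts at 500ms and doubles on each failure; the Go sketch below reproduces that schedule. The constants mirror kubelet's exponentialbackoff defaults as recalled and should be treated as assumptions, not values read from this log.

    // backoff_sketch.go — minimal sketch of the retry schedule implied by
    // "durationBeforeRetry 8s" above. Constants are assumptions (kubelet's
    // exponentialbackoff defaults as recalled), not read from this log.
    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialDurationBeforeRetry = 500 * time.Millisecond        // assumed base delay
        maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second // assumed cap
    )

    func main() {
        d := initialDurationBeforeRetry
        for failures := 1; failures <= 10; failures++ {
            fmt.Printf("after failure %2d: wait %v\n", failures, d)
            d *= 2 // double the wait on every consecutive failure
            if d > maxDurationBeforeRetry {
                d = maxDurationBeforeRetry // never wait longer than the cap
            }
        }
    }

On this schedule the fifth consecutive failure yields the 8s wait seen here (0.5s, 1s, 2s, 4s, 8s), and the "No retries permitted until 08:42:21" timestamp is simply the failure time plus that wait.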
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs") pod "network-metrics-daemon-f6ghp" (UID: "b0c747eb-fe5e-4cad-a021-307cc2ed1ad5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.988550 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.988612 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.988630 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.988654 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:13 crc kubenswrapper[4644]: I0204 08:42:13.988671 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:13Z","lastTransitionTime":"2026-02-04T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.093081 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.093469 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.093830 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.094313 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.095937 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:14Z","lastTransitionTime":"2026-02-04T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.199523 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.199578 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.199587 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.199613 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.199629 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:14Z","lastTransitionTime":"2026-02-04T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.305466 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.305519 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.305528 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.305684 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.305697 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:14Z","lastTransitionTime":"2026-02-04T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.408751 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.408805 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.408820 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.408842 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.408857 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:14Z","lastTransitionTime":"2026-02-04T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.510952 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.511241 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.511380 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.511472 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.511571 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:14Z","lastTransitionTime":"2026-02-04T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.614079 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.614321 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.614411 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.614510 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.614568 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:14Z","lastTransitionTime":"2026-02-04T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.640956 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:45:01.108057564 +0000 UTC Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.716920 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.716968 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.716984 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.717005 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.717017 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:14Z","lastTransitionTime":"2026-02-04T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.819289 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.819543 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.819624 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.819693 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.819754 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:14Z","lastTransitionTime":"2026-02-04T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.922908 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.923257 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.923524 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.923775 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:14 crc kubenswrapper[4644]: I0204 08:42:14.923990 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:14Z","lastTransitionTime":"2026-02-04T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.026668 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.026706 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.026716 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.026732 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.026743 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:15Z","lastTransitionTime":"2026-02-04T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.129292 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.129367 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.129387 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.129406 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.129419 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:15Z","lastTransitionTime":"2026-02-04T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.232525 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.232586 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.232599 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.232618 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.232632 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:15Z","lastTransitionTime":"2026-02-04T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.336301 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.336379 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.336391 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.336409 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.336424 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:15Z","lastTransitionTime":"2026-02-04T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.439250 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.439306 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.439322 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.439368 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.439383 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:15Z","lastTransitionTime":"2026-02-04T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.543407 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.543441 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.543452 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.543468 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.543480 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:15Z","lastTransitionTime":"2026-02-04T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.641554 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 02:38:17.926963242 +0000 UTC Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.645498 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.645539 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.645552 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.645570 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.645580 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:15Z","lastTransitionTime":"2026-02-04T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.658867 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.658867 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:15 crc kubenswrapper[4644]: E0204 08:42:15.659069 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.658893 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:15 crc kubenswrapper[4644]: E0204 08:42:15.659260 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.658873 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:15 crc kubenswrapper[4644]: E0204 08:42:15.659432 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:15 crc kubenswrapper[4644]: E0204 08:42:15.659480 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.747951 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.747978 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.747986 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.747998 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.748007 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:15Z","lastTransitionTime":"2026-02-04T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.849998 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.850054 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.850067 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.850085 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.850098 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:15Z","lastTransitionTime":"2026-02-04T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.953158 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.953234 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.953257 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.953286 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:15 crc kubenswrapper[4644]: I0204 08:42:15.953306 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:15Z","lastTransitionTime":"2026-02-04T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.056940 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.057003 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.057025 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.057059 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.057082 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:16Z","lastTransitionTime":"2026-02-04T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.159502 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.159579 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.159606 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.159638 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.159662 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:16Z","lastTransitionTime":"2026-02-04T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.262179 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.262246 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.262268 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.262298 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.262321 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:16Z","lastTransitionTime":"2026-02-04T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.366070 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.366125 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.366137 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.366153 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.366165 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:16Z","lastTransitionTime":"2026-02-04T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.468648 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.468690 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.468700 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.468716 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.468727 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:16Z","lastTransitionTime":"2026-02-04T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.570765 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.570815 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.570830 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.570849 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.570863 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:16Z","lastTransitionTime":"2026-02-04T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.642654 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:55:02.519219385 +0000 UTC Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.673851 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.673909 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.673932 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.673960 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.673980 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:16Z","lastTransitionTime":"2026-02-04T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.776765 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.776823 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.776835 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.776877 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.776887 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:16Z","lastTransitionTime":"2026-02-04T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.880442 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.880488 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.880499 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.880519 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.880532 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:16Z","lastTransitionTime":"2026-02-04T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.983796 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.983832 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.983841 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.983859 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:16 crc kubenswrapper[4644]: I0204 08:42:16.983871 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:16Z","lastTransitionTime":"2026-02-04T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.088426 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.088503 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.088516 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.088540 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.088566 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:17Z","lastTransitionTime":"2026-02-04T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.191972 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.192103 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.192116 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.192137 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.192151 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:17Z","lastTransitionTime":"2026-02-04T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.295835 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.295886 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.295901 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.295924 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.295939 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:17Z","lastTransitionTime":"2026-02-04T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.399736 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.399788 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.399801 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.399827 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.399844 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:17Z","lastTransitionTime":"2026-02-04T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.502517 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.502559 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.502572 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.502596 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.502611 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:17Z","lastTransitionTime":"2026-02-04T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.605218 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.605597 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.605749 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.606087 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.606396 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:17Z","lastTransitionTime":"2026-02-04T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.643827 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:31:19.516309099 +0000 UTC Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.659278 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.659546 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.659427 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.659299 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:17 crc kubenswrapper[4644]: E0204 08:42:17.659876 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:17 crc kubenswrapper[4644]: E0204 08:42:17.660097 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:17 crc kubenswrapper[4644]: E0204 08:42:17.660208 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:17 crc kubenswrapper[4644]: E0204 08:42:17.660420 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.710410 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.710504 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.710524 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.710561 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.710582 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:17Z","lastTransitionTime":"2026-02-04T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.813303 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.813643 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.813769 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.813889 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.814006 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:17Z","lastTransitionTime":"2026-02-04T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.917606 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.917650 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.917660 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.917676 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:17 crc kubenswrapper[4644]: I0204 08:42:17.917687 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:17Z","lastTransitionTime":"2026-02-04T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.020744 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.020806 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.020823 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.020846 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.020863 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:18Z","lastTransitionTime":"2026-02-04T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.124732 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.124789 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.124810 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.124837 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.124857 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:18Z","lastTransitionTime":"2026-02-04T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.227567 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.227629 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.227648 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.227673 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.227692 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:18Z","lastTransitionTime":"2026-02-04T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.330303 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.330576 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.330685 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.330814 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.330905 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:18Z","lastTransitionTime":"2026-02-04T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.434255 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.434299 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.434309 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.434346 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.434357 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:18Z","lastTransitionTime":"2026-02-04T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.537120 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.537406 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.537498 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.537582 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.537669 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:18Z","lastTransitionTime":"2026-02-04T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.641366 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.641432 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.641453 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.641483 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.641505 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:18Z","lastTransitionTime":"2026-02-04T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.645443 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:26:29.486843862 +0000 UTC Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.746611 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.746678 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.746691 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.746733 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.746746 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:18Z","lastTransitionTime":"2026-02-04T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.850658 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.850948 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.851048 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.851139 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.851227 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:18Z","lastTransitionTime":"2026-02-04T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.954219 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.954580 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.954667 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.954788 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:18 crc kubenswrapper[4644]: I0204 08:42:18.954873 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:18Z","lastTransitionTime":"2026-02-04T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.057751 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.057806 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.057828 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.057864 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.057888 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:19Z","lastTransitionTime":"2026-02-04T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.160393 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.160483 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.160508 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.160535 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.160554 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:19Z","lastTransitionTime":"2026-02-04T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.263727 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.264071 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.264294 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.264556 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.264718 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:19Z","lastTransitionTime":"2026-02-04T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.368210 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.368757 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.368993 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.369203 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.369406 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:19Z","lastTransitionTime":"2026-02-04T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.472716 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.473098 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.473259 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.473455 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.473622 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:19Z","lastTransitionTime":"2026-02-04T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.578116 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.578188 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.578205 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.578231 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.578249 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:19Z","lastTransitionTime":"2026-02-04T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.645932 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 06:43:01.037672481 +0000 UTC Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.659303 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:19 crc kubenswrapper[4644]: E0204 08:42:19.659641 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.659419 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:19 crc kubenswrapper[4644]: E0204 08:42:19.659908 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.659513 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.659456 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:19 crc kubenswrapper[4644]: E0204 08:42:19.660232 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:19 crc kubenswrapper[4644]: E0204 08:42:19.660424 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.681449 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.681533 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.681558 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.681588 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.681610 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:19Z","lastTransitionTime":"2026-02-04T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.784354 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.784417 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.784436 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.784464 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.784489 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:19Z","lastTransitionTime":"2026-02-04T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.887417 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.887491 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.887512 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.887541 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.887565 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:19Z","lastTransitionTime":"2026-02-04T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.990362 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.990399 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.990410 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.990422 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:19 crc kubenswrapper[4644]: I0204 08:42:19.990431 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:19Z","lastTransitionTime":"2026-02-04T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.087011 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.087180 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.087193 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.087219 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.087233 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:20Z","lastTransitionTime":"2026-02-04T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:20 crc kubenswrapper[4644]: E0204 08:42:20.103628 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.108202 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.108255 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.108265 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.108285 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.108300 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:20Z","lastTransitionTime":"2026-02-04T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:20 crc kubenswrapper[4644]: E0204 08:42:20.129416 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.133561 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.133626 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.133644 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.133671 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.133690 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:20Z","lastTransitionTime":"2026-02-04T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:20 crc kubenswrapper[4644]: E0204 08:42:20.148662 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.153664 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.153718 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.153731 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.153752 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.153764 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:20Z","lastTransitionTime":"2026-02-04T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:20 crc kubenswrapper[4644]: E0204 08:42:20.170065 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.175261 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.175303 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.175315 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.175356 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.175370 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:20Z","lastTransitionTime":"2026-02-04T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:20 crc kubenswrapper[4644]: E0204 08:42:20.188818 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: E0204 08:42:20.188968 4644 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.192705 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.192733 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.192741 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.192754 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.192764 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:20Z","lastTransitionTime":"2026-02-04T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.296276 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.296301 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.296309 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.296322 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.296345 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:20Z","lastTransitionTime":"2026-02-04T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.331790 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.346095 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.348107 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.370517 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"message\\\":\\\".0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039248 5997 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039927 5997 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:05.039999 5997 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:05.040126 5997 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.384738 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.399498 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.399561 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.399582 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.399625 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.399645 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:20Z","lastTransitionTime":"2026-02-04T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.402106 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.415986 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.430623 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.447143 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.461802 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f
2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\
\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.472383 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.492576 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.501814 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.501925 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.501945 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.502019 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.502049 4644 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:20Z","lastTransitionTime":"2026-02-04T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.511436 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.530876 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.546249 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.568948 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.586868 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.605081 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.605129 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.605144 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.605166 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.605181 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:20Z","lastTransitionTime":"2026-02-04T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.609259 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.642994 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93
140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.646963 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:57:36.170428174 +0000 UTC Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.675319 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.696282 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b32206ea-5df9-4a4f-a9b1-a9bfa34c4b6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.708723 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.708993 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.709081 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.709173 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.709271 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:20Z","lastTransitionTime":"2026-02-04T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.719140 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.737946 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.761052 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1
616cc7e8755146db415b7546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"message\\\":\\\".0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039248 5997 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039927 5997 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:05.039999 5997 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:05.040126 5997 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.773589 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.790934 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\
\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2
eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08
:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.805440 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.812304 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.812374 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.812385 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.812406 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.812420 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:20Z","lastTransitionTime":"2026-02-04T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.823495 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.838620 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.855604 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.869403 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.887241 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.900959 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.915026 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.915057 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.915068 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.915086 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.915097 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:20Z","lastTransitionTime":"2026-02-04T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.917054 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.939177 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93
140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.957788 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:20 crc kubenswrapper[4644]: I0204 08:42:20.977079 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.018527 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.018572 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.018587 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.018616 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.018631 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:21Z","lastTransitionTime":"2026-02-04T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.122125 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.122225 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.122246 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.122299 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.122317 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:21Z","lastTransitionTime":"2026-02-04T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.225657 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.225738 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.225762 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.225802 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.225827 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:21Z","lastTransitionTime":"2026-02-04T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.329715 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.329780 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.329800 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.329830 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.329852 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:21Z","lastTransitionTime":"2026-02-04T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.434438 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.434548 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.434566 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.434593 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.434615 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:21Z","lastTransitionTime":"2026-02-04T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.439870 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.440014 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.440090 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:42:53.440057478 +0000 UTC m=+83.480115273 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.440234 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.440268 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.440297 4644 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.440410 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:53.440391408 +0000 UTC m=+83.480449203 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.440973 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.441176 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.441111 4644 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.441507 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-04 08:42:53.441491718 +0000 UTC m=+83.481549503 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.441298 4644 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.441562 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:53.441550689 +0000 UTC m=+83.481608474 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.441414 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.441739 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.441780 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.441800 4644 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.441888 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:53.441861418 +0000 UTC m=+83.481919183 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.537962 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.538023 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.538044 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.538068 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.538088 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:21Z","lastTransitionTime":"2026-02-04T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.641768 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.641831 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.641856 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.641891 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.641919 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:21Z","lastTransitionTime":"2026-02-04T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.648034 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:44:47.93390376 +0000 UTC Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.659483 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.659620 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.659640 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.659639 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.659805 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.659923 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.660070 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:21 crc kubenswrapper[4644]: E0204 08:42:21.660292 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.744755 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.744844 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.745138 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.745505 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.745569 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:21Z","lastTransitionTime":"2026-02-04T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.840751 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.848846 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.848896 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.848917 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.848944 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.848962 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:21Z","lastTransitionTime":"2026-02-04T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.861023 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:21Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.888791 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/o
pt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"rou
teoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:21Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.909682 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-04T08:42:21Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.932566 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:21Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.953052 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:21Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.953692 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.953758 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.953774 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.953795 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.953809 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:21Z","lastTransitionTime":"2026-02-04T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.975736 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:21Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:21 crc kubenswrapper[4644]: I0204 08:42:21.994489 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:21Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.013294 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:22Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.028067 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:22Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.048511 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs\") pod \"network-metrics-daemon-f6ghp\" (UID: \"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\") " pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:22 crc kubenswrapper[4644]: E0204 08:42:22.048702 4644 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:42:22 crc kubenswrapper[4644]: E0204 08:42:22.048791 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs podName:b0c747eb-fe5e-4cad-a021-307cc2ed1ad5 nodeName:}" failed. No retries permitted until 2026-02-04 08:42:38.048765336 +0000 UTC m=+68.088823101 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs") pod "network-metrics-daemon-f6ghp" (UID: "b0c747eb-fe5e-4cad-a021-307cc2ed1ad5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.053946 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-04T08:42:22Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.058080 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.058137 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.058153 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.058180 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.058197 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:22Z","lastTransitionTime":"2026-02-04T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.079802 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:22Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.099030 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:22Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.116899 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:22Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.129295 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:22Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.141524 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b32206ea-5df9-4a4f-a9b1-a9bfa34c4b6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:22Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.154904 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:22Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.160375 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.160413 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.160423 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.160440 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.160452 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:22Z","lastTransitionTime":"2026-02-04T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.169123 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:22Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.190053 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"message\\\":\\\".0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039248 5997 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039927 5997 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:05.039999 5997 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:05.040126 5997 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:22Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.263580 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.263642 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.263664 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.263693 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.263715 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:22Z","lastTransitionTime":"2026-02-04T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.366155 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.366448 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.366579 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.366672 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.366760 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:22Z","lastTransitionTime":"2026-02-04T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.470049 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.470153 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.470184 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.470216 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.470237 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:22Z","lastTransitionTime":"2026-02-04T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.573390 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.573458 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.573469 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.573484 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.573512 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:22Z","lastTransitionTime":"2026-02-04T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.649250 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:38:47.054906362 +0000 UTC Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.676287 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.676391 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.676409 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.676432 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.676449 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:22Z","lastTransitionTime":"2026-02-04T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.779430 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.779487 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.779503 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.779522 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.779535 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:22Z","lastTransitionTime":"2026-02-04T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.882257 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.882296 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.882304 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.882318 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.882342 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:22Z","lastTransitionTime":"2026-02-04T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.984873 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.984921 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.984934 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.984952 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:22 crc kubenswrapper[4644]: I0204 08:42:22.984963 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:22Z","lastTransitionTime":"2026-02-04T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.087651 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.087989 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.088144 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.088312 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.088525 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:23Z","lastTransitionTime":"2026-02-04T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.192589 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.192667 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.192690 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.192716 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.192735 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:23Z","lastTransitionTime":"2026-02-04T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.295888 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.295959 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.295976 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.295999 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.296011 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:23Z","lastTransitionTime":"2026-02-04T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.399377 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.399434 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.399449 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.399475 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.399488 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:23Z","lastTransitionTime":"2026-02-04T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.502029 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.502565 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.502751 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.502941 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.503125 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:23Z","lastTransitionTime":"2026-02-04T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.606734 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.606795 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.606807 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.606825 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.606837 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:23Z","lastTransitionTime":"2026-02-04T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.650314 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:57:38.414431616 +0000 UTC Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.659847 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.659905 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.659968 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.660079 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:23 crc kubenswrapper[4644]: E0204 08:42:23.660210 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:23 crc kubenswrapper[4644]: E0204 08:42:23.660440 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:23 crc kubenswrapper[4644]: E0204 08:42:23.660486 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.661189 4644 scope.go:117] "RemoveContainer" containerID="46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546" Feb 04 08:42:23 crc kubenswrapper[4644]: E0204 08:42:23.661418 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.710600 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.711129 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.711190 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.711225 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.711246 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:23Z","lastTransitionTime":"2026-02-04T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.814732 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.814791 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.814813 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.814841 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.814858 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:23Z","lastTransitionTime":"2026-02-04T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.917950 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.917985 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.917996 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.918012 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:23 crc kubenswrapper[4644]: I0204 08:42:23.918023 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:23Z","lastTransitionTime":"2026-02-04T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.022430 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.022471 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.022486 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.022504 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.022516 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:24Z","lastTransitionTime":"2026-02-04T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.065583 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/1.log" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.070170 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerStarted","Data":"247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e"} Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.071177 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.092822 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.115572 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.125060 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.125146 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.125166 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.125191 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.125234 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:24Z","lastTransitionTime":"2026-02-04T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.135773 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.151266 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.174305 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.191533 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.285756 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.285800 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.285812 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.285830 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.285840 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:24Z","lastTransitionTime":"2026-02-04T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.288537 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.301474 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 
08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.322114 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.348287 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93
140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.363258 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.381668 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.388106 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.388137 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.388147 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.388161 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.388172 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:24Z","lastTransitionTime":"2026-02-04T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.396760 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.410395 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.423517 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.442260 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52
a2b8cc65e5bbdc42b794d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"message\\\":\\\".0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039248 5997 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039927 5997 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:05.039999 5997 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:05.040126 5997 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.453008 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.466039 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b32206ea-5df9-4a4f-a9b1-a9bfa34c4b6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:24Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.490048 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.490089 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.490112 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.490127 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.490136 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:24Z","lastTransitionTime":"2026-02-04T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.593318 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.593372 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.593381 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.593398 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.593407 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:24Z","lastTransitionTime":"2026-02-04T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.650569 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 14:53:48.76101277 +0000 UTC
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.695973 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.696031 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.696077 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.696101 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.696131 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:24Z","lastTransitionTime":"2026-02-04T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.798566 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.798604 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.798616 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.798634 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.798647 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:24Z","lastTransitionTime":"2026-02-04T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.901728 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.901780 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.901799 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.901820 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:24 crc kubenswrapper[4644]: I0204 08:42:24.901838 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:24Z","lastTransitionTime":"2026-02-04T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.004082 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.004163 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.004175 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.004213 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.004226 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:25Z","lastTransitionTime":"2026-02-04T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.076571 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/2.log"
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.077507 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/1.log"
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.081545 4644 generic.go:334] "Generic (PLEG): container finished" podID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerID="247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e" exitCode=1
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.081611 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerDied","Data":"247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e"}
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.081674 4644 scope.go:117] "RemoveContainer" containerID="46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546"
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.082484 4644 scope.go:117] "RemoveContainer" containerID="247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e"
Feb 04 08:42:25 crc kubenswrapper[4644]: E0204 08:42:25.083050 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d"
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.107099 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.107173 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.107197 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.107230 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.107252 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:25Z","lastTransitionTime":"2026-02-04T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.113486 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.134641 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.152108 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.170443 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.200308 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.210199 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.210229 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:25 crc 
kubenswrapper[4644]: I0204 08:42:25.210243 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.210278 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.210295 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:25Z","lastTransitionTime":"2026-02-04T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.215557 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.232087 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.247865 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 
08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.262715 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.287199 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93
140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.301871 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.313105 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.313177 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.313190 4644 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.313206 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.313252 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:25Z","lastTransitionTime":"2026-02-04T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.315098 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.329361 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.342699 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.354391 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.376029 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52
a2b8cc65e5bbdc42b794d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46edf564215e27a7612aa7f19e080569551288d1616cc7e8755146db415b7546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"message\\\":\\\".0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039248 5997 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0204 08:42:05.039927 5997 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:05.039999 5997 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:05.040126 5997 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:24Z\\\",\\\"message\\\":\\\"7 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:24.723853 6227 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 08:42:24.723916 6227 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:24.724730 6227 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 08:42:24.724771 6227 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:24.724809 6227 factory.go:656] Stopping watch factory\\\\nI0204 08:42:24.724842 6227 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:24.724854 6227 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 08:42:24.730401 6227 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0204 08:42:24.730452 6227 services_controller.go:204] 
Setting up event handlers for services for network=default\\\\nI0204 08:42:24.730528 6227 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:24.730583 6227 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:24.730687 6227 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.385675 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.397942 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b32206ea-5df9-4a4f-a9b1-a9bfa34c4b6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:25Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.415947 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.415982 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.415992 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.416005 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.416013 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:25Z","lastTransitionTime":"2026-02-04T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.519179 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.519209 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.519217 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.519229 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.519238 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:25Z","lastTransitionTime":"2026-02-04T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.622199 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.622259 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.622274 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.622296 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.622311 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:25Z","lastTransitionTime":"2026-02-04T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.651608 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:20:13.835457811 +0000 UTC Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.658942 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.658966 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.659001 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.659080 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:25 crc kubenswrapper[4644]: E0204 08:42:25.659101 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:25 crc kubenswrapper[4644]: E0204 08:42:25.659198 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:25 crc kubenswrapper[4644]: E0204 08:42:25.659358 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:25 crc kubenswrapper[4644]: E0204 08:42:25.659454 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.725158 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.725212 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.725229 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.725252 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.725270 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:25Z","lastTransitionTime":"2026-02-04T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.828028 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.828301 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.828487 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.828638 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.828799 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:25Z","lastTransitionTime":"2026-02-04T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.931650 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.932021 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.932036 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.932056 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:25 crc kubenswrapper[4644]: I0204 08:42:25.932075 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:25Z","lastTransitionTime":"2026-02-04T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.035011 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.035064 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.035084 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.035108 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.035128 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:26Z","lastTransitionTime":"2026-02-04T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.087861 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/2.log" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.093935 4644 scope.go:117] "RemoveContainer" containerID="247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e" Feb 04 08:42:26 crc kubenswrapper[4644]: E0204 08:42:26.094249 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.136852 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31
Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.139294 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.139402 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.139428 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.139454 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.139473 4644 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:26Z","lastTransitionTime":"2026-02-04T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.158623 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.180834 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.202812 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.219937 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b32206ea-5df9-4a4f-a9b1-a9bfa34c4b6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.236160 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.241678 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.241714 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.241725 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.241742 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.241755 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:26Z","lastTransitionTime":"2026-02-04T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.257895 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.297424 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:24Z\\\",\\\"message\\\":\\\"7 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:24.723853 6227 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 08:42:24.723916 6227 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:24.724730 6227 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 08:42:24.724771 6227 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:24.724809 6227 factory.go:656] Stopping watch factory\\\\nI0204 08:42:24.724842 6227 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:24.724854 6227 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 08:42:24.730401 6227 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0204 08:42:24.730452 6227 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0204 08:42:24.730528 6227 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:24.730583 6227 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:24.730687 6227 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.317897 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.340159 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.344707 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.344762 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.344779 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.344802 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.344818 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:26Z","lastTransitionTime":"2026-02-04T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.402968 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 
08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 
08:42:26.440515 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.447501 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.447545 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.447559 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.447577 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.447589 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:26Z","lastTransitionTime":"2026-02-04T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.454628 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.464640 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.473947 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.486639 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.498947 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.509125 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:26Z is after 2025-08-24T17:21:41Z" Feb 04 
08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.549752 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.549808 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.549818 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.549832 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.549841 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:26Z","lastTransitionTime":"2026-02-04T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.651731 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 00:37:50.013987908 +0000 UTC Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.652016 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.652035 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.652044 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.652056 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.652065 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:26Z","lastTransitionTime":"2026-02-04T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.754875 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.754939 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.754957 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.754981 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.754998 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:26Z","lastTransitionTime":"2026-02-04T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.857860 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.857982 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.858052 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.858127 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.858155 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:26Z","lastTransitionTime":"2026-02-04T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.961130 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.961192 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.961211 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.961234 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:26 crc kubenswrapper[4644]: I0204 08:42:26.961253 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:26Z","lastTransitionTime":"2026-02-04T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.064450 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.064608 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.064635 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.064708 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.064737 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:27Z","lastTransitionTime":"2026-02-04T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.168447 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.168519 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.168537 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.168563 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.168582 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:27Z","lastTransitionTime":"2026-02-04T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.271733 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.271800 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.271823 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.271852 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.271877 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:27Z","lastTransitionTime":"2026-02-04T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.374773 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.374820 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.374829 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.374843 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.374860 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:27Z","lastTransitionTime":"2026-02-04T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.477726 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.477789 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.477806 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.477829 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.477846 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:27Z","lastTransitionTime":"2026-02-04T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.581448 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.581514 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.581534 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.581558 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.581577 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:27Z","lastTransitionTime":"2026-02-04T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.652522 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:44:41.081941496 +0000 UTC Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.658814 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.658875 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.658827 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:27 crc kubenswrapper[4644]: E0204 08:42:27.658979 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.659082 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:27 crc kubenswrapper[4644]: E0204 08:42:27.659212 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:27 crc kubenswrapper[4644]: E0204 08:42:27.659270 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:27 crc kubenswrapper[4644]: E0204 08:42:27.659447 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.684207 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.684297 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.684318 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.684381 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.684402 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:27Z","lastTransitionTime":"2026-02-04T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.787220 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.787290 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.787315 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.787375 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.787399 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:27Z","lastTransitionTime":"2026-02-04T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.890696 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.890770 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.890791 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.890818 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.890840 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:27Z","lastTransitionTime":"2026-02-04T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.994796 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.994881 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.994899 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.994922 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:27 crc kubenswrapper[4644]: I0204 08:42:27.994940 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:27Z","lastTransitionTime":"2026-02-04T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.097926 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.097963 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.097972 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.097989 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.097998 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:28Z","lastTransitionTime":"2026-02-04T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.201962 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.202002 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.202010 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.202024 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.202032 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:28Z","lastTransitionTime":"2026-02-04T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.305626 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.305740 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.305855 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.305910 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.305934 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:28Z","lastTransitionTime":"2026-02-04T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.409605 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.409673 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.409692 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.409718 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.409742 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:28Z","lastTransitionTime":"2026-02-04T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.513440 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.513530 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.513551 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.513587 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.513607 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:28Z","lastTransitionTime":"2026-02-04T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.617169 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.617236 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.617255 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.617279 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.617298 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:28Z","lastTransitionTime":"2026-02-04T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.653421 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:06:36.546105322 +0000 UTC Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.722083 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.722153 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.722165 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.722224 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.722249 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:28Z","lastTransitionTime":"2026-02-04T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.825846 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.825917 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.825937 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.825963 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.825981 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:28Z","lastTransitionTime":"2026-02-04T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.928909 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.928961 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.928972 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.928989 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:28 crc kubenswrapper[4644]: I0204 08:42:28.929001 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:28Z","lastTransitionTime":"2026-02-04T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.031230 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.031294 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.031316 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.031362 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.031375 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:29Z","lastTransitionTime":"2026-02-04T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.134959 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.135065 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.135085 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.135167 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.135267 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:29Z","lastTransitionTime":"2026-02-04T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.238356 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.238403 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.238417 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.238434 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.238446 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:29Z","lastTransitionTime":"2026-02-04T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.343893 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.343949 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.343965 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.343988 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.344004 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:29Z","lastTransitionTime":"2026-02-04T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.447091 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.447155 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.447172 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.447198 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.447215 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:29Z","lastTransitionTime":"2026-02-04T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.550765 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.550833 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.550850 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.550874 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.550895 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:29Z","lastTransitionTime":"2026-02-04T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.653642 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:44:48.523293207 +0000 UTC Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.654589 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.654658 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.654678 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.654710 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.654730 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:29Z","lastTransitionTime":"2026-02-04T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.659866 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.660005 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:29 crc kubenswrapper[4644]: E0204 08:42:29.660052 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.659974 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:29 crc kubenswrapper[4644]: E0204 08:42:29.660197 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.660124 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:29 crc kubenswrapper[4644]: E0204 08:42:29.660515 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:29 crc kubenswrapper[4644]: E0204 08:42:29.661620 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.757393 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.757451 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.757469 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.757493 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.757511 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:29Z","lastTransitionTime":"2026-02-04T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.860226 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.860272 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.860288 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.860307 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.860322 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:29Z","lastTransitionTime":"2026-02-04T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.963484 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.963562 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.963580 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.963603 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:29 crc kubenswrapper[4644]: I0204 08:42:29.963621 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:29Z","lastTransitionTime":"2026-02-04T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.066317 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.066400 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.066418 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.066441 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.066458 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.169628 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.169693 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.169711 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.169742 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.169759 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.272744 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.272785 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.272798 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.272815 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.272828 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.287021 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.287078 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.287103 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.287126 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.287140 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:30 crc kubenswrapper[4644]: E0204 08:42:30.303055 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.307901 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.307941 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.307956 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.307979 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.307996 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:30 crc kubenswrapper[4644]: E0204 08:42:30.320962 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.324705 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.324744 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.324753 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.324768 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.324778 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:30 crc kubenswrapper[4644]: E0204 08:42:30.343489 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.346239 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.346304 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
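Every failed status patch above ends in the same root cause: the serving certificate behind the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-04. A minimal Go sketch for confirming this from the node itself, assuming the endpoint is reachable; it skips chain verification only so the handshake survives long enough to read the expired certificate's validity window (diagnostic code, not part of any OpenShift tooling):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify lets us inspect a certificate that normal
	// verification rejects as expired; never use it for real traffic.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("handshake failed: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))

	// Go's verifier reports "x509: certificate has expired or is not yet
	// valid" exactly when the current time falls outside [NotBefore, NotAfter].
	now := time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Println("certificate is outside its validity window")
	} else {
		fmt.Println("certificate is currently valid")
	}
}

Against the endpoint above this would print notAfter 2025-08-24T17:21:41Z and flag the window violation, matching the kubelet's error string.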
event="NodeHasNoDiskPressure" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.346319 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.346351 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.346364 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:30 crc kubenswrapper[4644]: E0204 08:42:30.356780 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.359365 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.359392 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
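The payload being retried is a strategic merge patch of the node's .status: the $setElementOrder/conditions directive fixes the ordering of the four condition entries, and each condition carries the type/status/reason/message fields that the setters.go entries print. A minimal Go sketch of how the Ready condition shown above decodes, using a hand-rolled struct in place of the real k8s.io/api/core/v1.NodeCondition type so it builds with no cluster dependencies (the JSON is copied verbatim from the log):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// nodeCondition mirrors only the fields visible in the log; the canonical
// type is k8s.io/api/core/v1.NodeCondition.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Verbatim Ready condition from the setters.go:603 entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	// The node counts as Ready only when Status is "True"; here it is
	// "False" because no CNI configuration has been written yet.
	fmt.Printf("ready=%v reason=%s\n", c.Status == "True", c.Reason)
}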
event="NodeHasNoDiskPressure" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.359401 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.359414 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.359423 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:30 crc kubenswrapper[4644]: E0204 08:42:30.370227 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: E0204 08:42:30.370351 4644 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.374720 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
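The "Error updating node status, will retry" errors above culminate here in "update node status exceeds retry count": the kubelet wraps the status patch in a bounded retry loop (capped upstream by the constant nodeStatusUpdateRetry = 5) and gives up once the budget is spent. A schematic Go sketch of that pattern, with a hypothetical stub standing in for the real API call; this is not kubelet's actual code:

package main

import (
	"errors"
	"fmt"
)

// Mirrors the upstream kubelet constant nodeStatusUpdateRetry.
const nodeStatusUpdateRetry = 5

// patchNodeStatus is a hypothetical stub that fails the way this log shows:
// the admission webhook's serving certificate is expired.
func patchNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": x509: certificate has expired or is not yet valid`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	// Matches the terminal journal entry once all attempts are spent.
	return errors.New("update node status exceeds retry count")
}

func main() {
	// With a permanently failing patch this prints five retry lines and
	// then the terminal error, matching the journal sequence above.
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}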
event="NodeHasSufficientMemory" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.374761 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.374773 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.374791 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.374804 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.477698 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.477757 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.477782 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.477813 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.477837 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.581679 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.581759 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.581781 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.581812 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.581841 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.654423 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:34:40.461910094 +0000 UTC Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.679483 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 
2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.684538 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.684583 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.684595 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.684616 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.684627 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.701656 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.720282 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.744275 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.758401 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.777800 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b32206ea-5df9-4a4f-a9b1-a9bfa34c4b6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.786985 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.787012 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.787022 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.787039 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.787050 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.794735 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.807739 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.828248 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52
a2b8cc65e5bbdc42b794d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:24Z\\\",\\\"message\\\":\\\"7 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:24.723853 6227 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 08:42:24.723916 6227 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:24.724730 6227 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 08:42:24.724771 6227 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:24.724809 6227 factory.go:656] Stopping watch factory\\\\nI0204 08:42:24.724842 6227 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:24.724854 6227 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 08:42:24.730401 6227 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0204 08:42:24.730452 6227 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0204 08:42:24.730528 6227 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:24.730583 6227 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:24.730687 6227 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.838017 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.852173 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\
\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2
eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08
:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.863944 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.878843 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.890747 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.891232 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.891268 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.891280 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.891296 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.891310 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.906437 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.935759 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.952449 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.963613 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:30Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.994886 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.994933 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.994949 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.994967 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:30 crc kubenswrapper[4644]: I0204 08:42:30.994979 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:30Z","lastTransitionTime":"2026-02-04T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.098636 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.098701 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.098725 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.098754 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.098776 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:31Z","lastTransitionTime":"2026-02-04T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.202100 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.202149 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.202167 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.202191 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.202209 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:31Z","lastTransitionTime":"2026-02-04T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.305435 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.305509 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.305532 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.305563 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.305586 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:31Z","lastTransitionTime":"2026-02-04T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.408436 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.408483 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.408494 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.408510 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.408522 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:31Z","lastTransitionTime":"2026-02-04T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.511858 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.511922 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.511940 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.511963 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.511981 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:31Z","lastTransitionTime":"2026-02-04T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.614789 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.614845 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.614865 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.614888 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.614905 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:31Z","lastTransitionTime":"2026-02-04T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.655085 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:00:55.780383093 +0000 UTC Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.659690 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.659761 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.659712 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.659841 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:31 crc kubenswrapper[4644]: E0204 08:42:31.659978 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:31 crc kubenswrapper[4644]: E0204 08:42:31.660116 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:31 crc kubenswrapper[4644]: E0204 08:42:31.660218 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:31 crc kubenswrapper[4644]: E0204 08:42:31.660260 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.718721 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.718791 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.718809 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.718835 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.718852 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:31Z","lastTransitionTime":"2026-02-04T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.822553 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.822590 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.822611 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.822625 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.822634 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:31Z","lastTransitionTime":"2026-02-04T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.926476 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.926534 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.926553 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.926577 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:31 crc kubenswrapper[4644]: I0204 08:42:31.926594 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:31Z","lastTransitionTime":"2026-02-04T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.029302 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.029377 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.029393 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.029411 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.029424 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:32Z","lastTransitionTime":"2026-02-04T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.132929 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.133001 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.133019 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.133043 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.133065 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:32Z","lastTransitionTime":"2026-02-04T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.235927 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.235975 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.235992 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.236015 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.236031 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:32Z","lastTransitionTime":"2026-02-04T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.339500 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.339562 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.339585 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.339682 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.339710 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:32Z","lastTransitionTime":"2026-02-04T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.443044 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.443141 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.443161 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.443224 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.443244 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:32Z","lastTransitionTime":"2026-02-04T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.545642 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.545692 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.545704 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.545724 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.545735 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:32Z","lastTransitionTime":"2026-02-04T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.648164 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.648200 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.648238 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.648254 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.648267 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:32Z","lastTransitionTime":"2026-02-04T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.656196 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:12:47.449427413 +0000 UTC Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.751732 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.751806 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.751825 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.751853 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.751871 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:32Z","lastTransitionTime":"2026-02-04T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.854598 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.854636 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.854649 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.854669 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.854683 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:32Z","lastTransitionTime":"2026-02-04T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.957703 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.957742 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.957753 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.957771 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:32 crc kubenswrapper[4644]: I0204 08:42:32.957785 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:32Z","lastTransitionTime":"2026-02-04T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.059730 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.059787 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.059809 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.059837 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.059859 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:33Z","lastTransitionTime":"2026-02-04T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.163217 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.163274 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.163285 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.163300 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.163310 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:33Z","lastTransitionTime":"2026-02-04T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.266200 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.266235 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.266246 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.266262 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.266272 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:33Z","lastTransitionTime":"2026-02-04T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.369400 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.369509 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.369534 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.369562 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.369583 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:33Z","lastTransitionTime":"2026-02-04T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.473247 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.473311 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.473352 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.473376 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.473394 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:33Z","lastTransitionTime":"2026-02-04T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.577220 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.577285 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.577303 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.577358 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.577377 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:33Z","lastTransitionTime":"2026-02-04T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.657258 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:15:42.83789507 +0000 UTC Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.659729 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.659736 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.659870 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:33 crc kubenswrapper[4644]: E0204 08:42:33.660058 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.660448 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:33 crc kubenswrapper[4644]: E0204 08:42:33.660614 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:33 crc kubenswrapper[4644]: E0204 08:42:33.660934 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:33 crc kubenswrapper[4644]: E0204 08:42:33.660797 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.680785 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.681015 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.681033 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.681057 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.681073 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:33Z","lastTransitionTime":"2026-02-04T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.783566 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.783609 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.783620 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.783635 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.783645 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:33Z","lastTransitionTime":"2026-02-04T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.886764 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.886829 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.886850 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.886879 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.886899 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:33Z","lastTransitionTime":"2026-02-04T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.989387 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.989440 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.989459 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.989482 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:33 crc kubenswrapper[4644]: I0204 08:42:33.989500 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:33Z","lastTransitionTime":"2026-02-04T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.092003 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.092050 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.092064 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.092082 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.092096 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:34Z","lastTransitionTime":"2026-02-04T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.195461 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.195540 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.195554 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.195602 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.195618 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:34Z","lastTransitionTime":"2026-02-04T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.298070 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.298292 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.298394 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.298465 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.298524 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:34Z","lastTransitionTime":"2026-02-04T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.402055 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.402371 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.402459 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.402541 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.402610 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:34Z","lastTransitionTime":"2026-02-04T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.506580 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.506634 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.506649 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.506671 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.506712 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:34Z","lastTransitionTime":"2026-02-04T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.609740 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.609941 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.610038 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.610116 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.610178 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:34Z","lastTransitionTime":"2026-02-04T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.658563 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:52:32.596387691 +0000 UTC Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.714438 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.714497 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.714506 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.714524 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.714535 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:34Z","lastTransitionTime":"2026-02-04T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.816980 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.817035 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.817046 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.817064 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.817093 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:34Z","lastTransitionTime":"2026-02-04T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.920003 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.920053 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.920064 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.920079 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:34 crc kubenswrapper[4644]: I0204 08:42:34.920088 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:34Z","lastTransitionTime":"2026-02-04T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.022947 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.023226 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.023300 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.023396 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.023470 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:35Z","lastTransitionTime":"2026-02-04T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.126562 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.126605 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.126617 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.126634 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.126644 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:35Z","lastTransitionTime":"2026-02-04T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.228570 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.228864 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.228929 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.228997 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.229061 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:35Z","lastTransitionTime":"2026-02-04T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.330938 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.331195 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.331269 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.331354 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.331438 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:35Z","lastTransitionTime":"2026-02-04T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.434617 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.434682 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.434702 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.434726 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.434743 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:35Z","lastTransitionTime":"2026-02-04T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.537192 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.537254 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.537276 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.537302 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.537324 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:35Z","lastTransitionTime":"2026-02-04T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.640249 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.640381 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.640410 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.640443 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.640466 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:35Z","lastTransitionTime":"2026-02-04T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.659435 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 22:51:57.033807533 +0000 UTC Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.659603 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.659630 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:35 crc kubenswrapper[4644]: E0204 08:42:35.659776 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.659862 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.659926 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:35 crc kubenswrapper[4644]: E0204 08:42:35.660041 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:35 crc kubenswrapper[4644]: E0204 08:42:35.660162 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:35 crc kubenswrapper[4644]: E0204 08:42:35.660263 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.743129 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.743186 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.743197 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.743215 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.743230 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:35Z","lastTransitionTime":"2026-02-04T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.846311 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.846753 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.846783 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.846816 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.846840 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:35Z","lastTransitionTime":"2026-02-04T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.950230 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.950284 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.950300 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.950322 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:35 crc kubenswrapper[4644]: I0204 08:42:35.950360 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:35Z","lastTransitionTime":"2026-02-04T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.052481 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.052530 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.052543 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.052559 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.052569 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:36Z","lastTransitionTime":"2026-02-04T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.154969 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.155000 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.155010 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.155024 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.155034 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:36Z","lastTransitionTime":"2026-02-04T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.256978 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.257018 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.257027 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.257042 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.257050 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:36Z","lastTransitionTime":"2026-02-04T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.359349 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.359383 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.359392 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.359405 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.359414 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:36Z","lastTransitionTime":"2026-02-04T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.462500 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.462534 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.462542 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.462557 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.462565 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:36Z","lastTransitionTime":"2026-02-04T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.564948 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.564979 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.564989 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.565003 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.565049 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:36Z","lastTransitionTime":"2026-02-04T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.660010 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:06:38.141442826 +0000 UTC Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.667014 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.667065 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.667085 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.667106 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.667123 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:36Z","lastTransitionTime":"2026-02-04T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.769468 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.769504 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.769515 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.769530 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.769565 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:36Z","lastTransitionTime":"2026-02-04T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.871693 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.871722 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.871753 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.871778 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.871790 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:36Z","lastTransitionTime":"2026-02-04T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.974207 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.974241 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.974254 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.974268 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:36 crc kubenswrapper[4644]: I0204 08:42:36.974279 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:36Z","lastTransitionTime":"2026-02-04T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.076255 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.076281 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.076293 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.076307 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.076318 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:37Z","lastTransitionTime":"2026-02-04T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.177932 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.177970 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.177980 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.177993 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.178038 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:37Z","lastTransitionTime":"2026-02-04T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.280796 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.281070 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.281155 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.281229 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.281300 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:37Z","lastTransitionTime":"2026-02-04T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.383911 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.383945 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.383955 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.383987 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.383998 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:37Z","lastTransitionTime":"2026-02-04T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.486556 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.486615 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.486632 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.486657 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.486681 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:37Z","lastTransitionTime":"2026-02-04T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.588562 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.588630 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.588644 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.588665 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.588684 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:37Z","lastTransitionTime":"2026-02-04T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.658875 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.658931 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.658889 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.658886 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:37 crc kubenswrapper[4644]: E0204 08:42:37.659007 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:37 crc kubenswrapper[4644]: E0204 08:42:37.659126 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:37 crc kubenswrapper[4644]: E0204 08:42:37.659177 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:37 crc kubenswrapper[4644]: E0204 08:42:37.659255 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.660966 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 00:07:46.63581157 +0000 UTC Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.690117 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.690140 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.690148 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.690178 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.690187 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:37Z","lastTransitionTime":"2026-02-04T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.792411 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.792463 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.792480 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.792503 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.792523 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:37Z","lastTransitionTime":"2026-02-04T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.895365 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.895425 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.895450 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.895482 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.895510 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:37Z","lastTransitionTime":"2026-02-04T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.997557 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.997595 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.997605 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.997619 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:37 crc kubenswrapper[4644]: I0204 08:42:37.997629 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:37Z","lastTransitionTime":"2026-02-04T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.061345 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs\") pod \"network-metrics-daemon-f6ghp\" (UID: \"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\") " pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:38 crc kubenswrapper[4644]: E0204 08:42:38.061467 4644 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:42:38 crc kubenswrapper[4644]: E0204 08:42:38.061521 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs podName:b0c747eb-fe5e-4cad-a021-307cc2ed1ad5 nodeName:}" failed. No retries permitted until 2026-02-04 08:43:10.061506712 +0000 UTC m=+100.101564467 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs") pod "network-metrics-daemon-f6ghp" (UID: "b0c747eb-fe5e-4cad-a021-307cc2ed1ad5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.103684 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.103736 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.103748 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.103765 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.103778 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:38Z","lastTransitionTime":"2026-02-04T08:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.206089 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.206144 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.206162 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.206187 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.206204 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:38Z","lastTransitionTime":"2026-02-04T08:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.308710 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.308760 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.308771 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.308787 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.308800 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:38Z","lastTransitionTime":"2026-02-04T08:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.411037 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.411068 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.411076 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.411089 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.411098 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:38Z","lastTransitionTime":"2026-02-04T08:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.513263 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.513297 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.513306 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.513336 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.513347 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:38Z","lastTransitionTime":"2026-02-04T08:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.615457 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.615500 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.615512 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.615528 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.615540 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:38Z","lastTransitionTime":"2026-02-04T08:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.661864 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:16:29.50802899 +0000 UTC Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.718557 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.718609 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.718618 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.718630 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.718640 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:38Z","lastTransitionTime":"2026-02-04T08:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.820990 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.821042 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.821061 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.821083 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.821099 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:38Z","lastTransitionTime":"2026-02-04T08:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.922903 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.922932 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.922940 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.922952 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:38 crc kubenswrapper[4644]: I0204 08:42:38.922961 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:38Z","lastTransitionTime":"2026-02-04T08:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.025366 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.025400 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.025410 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.025425 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.025435 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:39Z","lastTransitionTime":"2026-02-04T08:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.128123 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.128164 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.128175 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.128191 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.128201 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:39Z","lastTransitionTime":"2026-02-04T08:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.231122 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.231156 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.231185 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.231208 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.231223 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:39Z","lastTransitionTime":"2026-02-04T08:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.333404 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.333438 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.333449 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.333463 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.333475 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:39Z","lastTransitionTime":"2026-02-04T08:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.435282 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.435359 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.435376 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.435399 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.435417 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:39Z","lastTransitionTime":"2026-02-04T08:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.537928 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.537976 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.537987 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.538005 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.538018 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:39Z","lastTransitionTime":"2026-02-04T08:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.640183 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.640232 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.640245 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.640260 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.640272 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:39Z","lastTransitionTime":"2026-02-04T08:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.659486 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.659556 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.659637 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:39 crc kubenswrapper[4644]: E0204 08:42:39.659726 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:39 crc kubenswrapper[4644]: E0204 08:42:39.659854 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.659931 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:39 crc kubenswrapper[4644]: E0204 08:42:39.659964 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:39 crc kubenswrapper[4644]: E0204 08:42:39.660098 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.662705 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 11:46:14.947989376 +0000 UTC Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.743478 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.743523 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.743532 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.743548 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.743556 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:39Z","lastTransitionTime":"2026-02-04T08:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.846611 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.846654 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.846664 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.846682 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.846693 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:39Z","lastTransitionTime":"2026-02-04T08:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.949533 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.949575 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.949586 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.949602 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:39 crc kubenswrapper[4644]: I0204 08:42:39.949614 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:39Z","lastTransitionTime":"2026-02-04T08:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.052520 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.052592 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.052614 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.052644 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.052667 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:40Z","lastTransitionTime":"2026-02-04T08:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.155143 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mszlj_7aa20f1c-0ad7-449e-a179-e246a52dfb2a/kube-multus/0.log" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.155199 4644 generic.go:334] "Generic (PLEG): container finished" podID="7aa20f1c-0ad7-449e-a179-e246a52dfb2a" containerID="3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5" exitCode=1 Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.155230 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mszlj" event={"ID":"7aa20f1c-0ad7-449e-a179-e246a52dfb2a","Type":"ContainerDied","Data":"3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5"} Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.155640 4644 scope.go:117] "RemoveContainer" containerID="3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.155937 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.155964 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.155974 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.155988 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.156000 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:40Z","lastTransitionTime":"2026-02-04T08:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.181929 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.208036 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.227455 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:39Z\\\",\\\"message\\\":\\\"2026-02-04T08:41:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3\\\\n2026-02-04T08:41:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3 to /host/opt/cni/bin/\\\\n2026-02-04T08:41:54Z [verbose] multus-daemon started\\\\n2026-02-04T08:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-04T08:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.242796 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.255286 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.259567 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.259638 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.259653 4644 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.259674 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.259715 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:40Z","lastTransitionTime":"2026-02-04T08:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.273155 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":
0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.289983 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.319041 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.330569 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 
08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.352631 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.362152 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.362175 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.362183 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.362196 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.362204 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:40Z","lastTransitionTime":"2026-02-04T08:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.365773 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.375713 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.386777 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.396236 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b32206ea-5df9-4a4f-a9b1-a9bfa34c4b6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.406361 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.415245 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.430903 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:24Z\\\",\\\"message\\\":\\\"7 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:24.723853 6227 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 08:42:24.723916 6227 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:24.724730 6227 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 08:42:24.724771 6227 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:24.724809 6227 factory.go:656] Stopping watch factory\\\\nI0204 08:42:24.724842 6227 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:24.724854 6227 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 08:42:24.730401 6227 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0204 08:42:24.730452 6227 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0204 08:42:24.730528 6227 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:24.730583 6227 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:24.730687 6227 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.438988 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.465142 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.465214 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.465225 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.465245 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.465260 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:40Z","lastTransitionTime":"2026-02-04T08:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.530168 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.530227 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.530248 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.530276 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.530294 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:40Z","lastTransitionTime":"2026-02-04T08:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:40 crc kubenswrapper[4644]: E0204 08:42:40.547751 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.551651 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.551705 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.551726 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.551749 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.551767 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:40Z","lastTransitionTime":"2026-02-04T08:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:40 crc kubenswrapper[4644]: E0204 08:42:40.568882 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.572354 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.572422 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.572437 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.572461 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.572483 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:40Z","lastTransitionTime":"2026-02-04T08:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:40 crc kubenswrapper[4644]: E0204 08:42:40.590144 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.593388 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.593428 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.593439 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.593455 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.593466 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:40Z","lastTransitionTime":"2026-02-04T08:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:40 crc kubenswrapper[4644]: E0204 08:42:40.609860 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.613169 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.613202 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.613214 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.613228 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.613237 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:40Z","lastTransitionTime":"2026-02-04T08:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:40 crc kubenswrapper[4644]: E0204 08:42:40.625117 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: E0204 08:42:40.625252 4644 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.626723 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.626751 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.626761 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.626774 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.626784 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:40Z","lastTransitionTime":"2026-02-04T08:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.660191 4644 scope.go:117] "RemoveContainer" containerID="247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e" Feb 04 08:42:40 crc kubenswrapper[4644]: E0204 08:42:40.660399 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.662880 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 01:15:29.389725772 +0000 UTC Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.670485 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.680823 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.700127 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.712910 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.725833 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.728711 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.728742 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.728752 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.728766 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.728776 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:40Z","lastTransitionTime":"2026-02-04T08:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.741969 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.753601 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b32206ea-5df9-4a4f-a9b1-a9bfa34c4b6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.771850 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.784149 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.804855 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:24Z\\\",\\\"message\\\":\\\"7 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:24.723853 6227 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 08:42:24.723916 6227 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:24.724730 6227 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 08:42:24.724771 6227 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:24.724809 6227 factory.go:656] Stopping watch factory\\\\nI0204 08:42:24.724842 6227 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:24.724854 6227 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 08:42:24.730401 6227 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0204 08:42:24.730452 6227 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0204 08:42:24.730528 6227 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:24.730583 6227 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:24.730687 6227 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.815757 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.829930 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.831865 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.831900 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.831909 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.831923 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.831931 4644 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:40Z","lastTransitionTime":"2026-02-04T08:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.842647 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.856856 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:39Z\\\",\\\"message\\\":\\\"2026-02-04T08:41:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3\\\\n2026-02-04T08:41:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3 to /host/opt/cni/bin/\\\\n2026-02-04T08:41:54Z [verbose] multus-daemon started\\\\n2026-02-04T08:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-04T08:42:39Z [error] have you checked that your 
default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.870991 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.880298 4644 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.893975 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.903906 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:40Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.933936 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.933970 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.933980 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.933994 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:40 crc kubenswrapper[4644]: I0204 08:42:40.934003 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:40Z","lastTransitionTime":"2026-02-04T08:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.036164 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.036199 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.036209 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.036226 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.036238 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:41Z","lastTransitionTime":"2026-02-04T08:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.138786 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.138831 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.138842 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.138859 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.138870 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:41Z","lastTransitionTime":"2026-02-04T08:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.160137 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mszlj_7aa20f1c-0ad7-449e-a179-e246a52dfb2a/kube-multus/0.log" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.160207 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mszlj" event={"ID":"7aa20f1c-0ad7-449e-a179-e246a52dfb2a","Type":"ContainerStarted","Data":"842358f4e1f1cf7051120bc0c0df667d8bfd38ea2aa8c58d059b3cec52839077"} Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.181042 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\
\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.192735 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.204799 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://842358f4e1f1cf7051120bc0c0df667d8bfd38ea2aa8c58d059b3cec52839077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:39Z\\\",\\\"message\\\":\\\"2026-02-04T08:41:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3\\\\n2026-02-04T08:41:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3 to /host/opt/cni/bin/\\\\n2026-02-04T08:41:54Z [verbose] multus-daemon started\\\\n2026-02-04T08:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-04T08:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.216673 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status 
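The kube-multus container's previous termination above (exit code 1 in lastState) ended with "still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf": multus polls for the default network's config file and gives up after a timeout, which is why the restart only succeeds once ovn-kubernetes has written its config. A stdlib sketch of that PollImmediate-style wait follows; the interval and timeout values are assumptions, not multus's configured ones.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile stats path immediately and then once per interval until
// timeout elapses -- a stdlib re-creation of the PollImmediate-style
// readiness-indicator wait described in the log, not multus's code.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		_, err := os.Stat(path)
		if err == nil {
			return nil // indicator file exists: default network is ready
		}
		if !errors.Is(err, os.ErrNotExist) {
			return err // unexpected stat failure
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Path from the log line above; interval/timeout are assumed values.
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 45*time.Second)
	fmt.Println(err)
}
```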
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.227919 4644 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.240900 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.242347 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.242371 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:41 crc 
kubenswrapper[4644]: I0204 08:42:41.242379 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.242392 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.242400 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:41Z","lastTransitionTime":"2026-02-04T08:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.251423 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.264445 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.277818 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.298065 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.311003 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.323453 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.337693 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.346651 4644 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.346678 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.346689 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.346703 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.346714 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:41Z","lastTransitionTime":"2026-02-04T08:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.349511 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b32206ea-5df9-4a4f-a9b1-a9bfa34c4b6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.361481 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.371555 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.387170 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:24Z\\\",\\\"message\\\":\\\"7 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:24.723853 6227 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 08:42:24.723916 6227 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:24.724730 6227 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 08:42:24.724771 6227 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:24.724809 6227 factory.go:656] Stopping watch factory\\\\nI0204 08:42:24.724842 6227 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:24.724854 6227 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 08:42:24.730401 6227 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0204 08:42:24.730452 6227 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0204 08:42:24.730528 6227 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:24.730583 6227 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:24.730687 6227 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.398282 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:41Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.449522 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.449563 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.449573 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.449588 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.449597 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:41Z","lastTransitionTime":"2026-02-04T08:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.551705 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.551744 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.551752 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.551769 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.551777 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:41Z","lastTransitionTime":"2026-02-04T08:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.654232 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.654263 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.654273 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.654287 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.654297 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:41Z","lastTransitionTime":"2026-02-04T08:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.659009 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:41 crc kubenswrapper[4644]: E0204 08:42:41.659095 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.659235 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:41 crc kubenswrapper[4644]: E0204 08:42:41.659291 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.659424 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:41 crc kubenswrapper[4644]: E0204 08:42:41.659470 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.659693 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:41 crc kubenswrapper[4644]: E0204 08:42:41.659947 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.663680 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 15:59:26.599019089 +0000 UTC Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.756890 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.756959 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.756979 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.757002 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.757019 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:41Z","lastTransitionTime":"2026-02-04T08:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.859271 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.859358 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.859378 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.859410 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.859431 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:41Z","lastTransitionTime":"2026-02-04T08:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.962750 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.962827 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.962847 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.962878 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:41 crc kubenswrapper[4644]: I0204 08:42:41.962906 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:41Z","lastTransitionTime":"2026-02-04T08:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.065852 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.065932 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.065950 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.065977 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.065996 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:42Z","lastTransitionTime":"2026-02-04T08:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.168714 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.168742 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.168753 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.168768 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.168777 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:42Z","lastTransitionTime":"2026-02-04T08:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.270813 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.270849 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.270859 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.270875 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.270884 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:42Z","lastTransitionTime":"2026-02-04T08:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.375198 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.375263 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.375282 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.375369 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.375408 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:42Z","lastTransitionTime":"2026-02-04T08:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.479025 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.479072 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.479082 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.479103 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.479118 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:42Z","lastTransitionTime":"2026-02-04T08:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.582486 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.582548 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.582568 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.582593 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.582608 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:42Z","lastTransitionTime":"2026-02-04T08:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.663793 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 20:37:29.305269961 +0000 UTC Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.670347 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.685632 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.685667 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.685681 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.685699 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.685710 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:42Z","lastTransitionTime":"2026-02-04T08:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.789089 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.789145 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.789155 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.789170 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.789188 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:42Z","lastTransitionTime":"2026-02-04T08:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.892192 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.892240 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.892251 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.892274 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.892287 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:42Z","lastTransitionTime":"2026-02-04T08:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.995905 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.995959 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.995971 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.995994 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:42 crc kubenswrapper[4644]: I0204 08:42:42.996011 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:42Z","lastTransitionTime":"2026-02-04T08:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.098837 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.098869 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.098877 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.098889 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.098899 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:43Z","lastTransitionTime":"2026-02-04T08:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.201311 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.201398 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.201415 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.201441 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.201459 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:43Z","lastTransitionTime":"2026-02-04T08:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.304228 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.304278 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.304293 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.304315 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.304345 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:43Z","lastTransitionTime":"2026-02-04T08:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.407087 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.407139 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.407149 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.407165 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.407175 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:43Z","lastTransitionTime":"2026-02-04T08:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.509650 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.509683 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.509692 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.509705 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.509714 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:43Z","lastTransitionTime":"2026-02-04T08:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.612287 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.612396 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.612414 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.612438 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.612456 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:43Z","lastTransitionTime":"2026-02-04T08:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.659295 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.659405 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.659470 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.659312 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:43 crc kubenswrapper[4644]: E0204 08:42:43.659594 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:43 crc kubenswrapper[4644]: E0204 08:42:43.659721 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:43 crc kubenswrapper[4644]: E0204 08:42:43.659805 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:43 crc kubenswrapper[4644]: E0204 08:42:43.659971 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.664517 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:02:56.196415503 +0000 UTC Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.714745 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.714787 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.714798 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.714815 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.714826 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:43Z","lastTransitionTime":"2026-02-04T08:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.817360 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.817398 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.817407 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.817421 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.817430 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:43Z","lastTransitionTime":"2026-02-04T08:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.920715 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.920793 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.920819 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.921011 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:43 crc kubenswrapper[4644]: I0204 08:42:43.921064 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:43Z","lastTransitionTime":"2026-02-04T08:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.024850 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.024936 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.024961 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.024993 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.025027 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:44Z","lastTransitionTime":"2026-02-04T08:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.128264 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.128307 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.128315 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.128345 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.128356 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:44Z","lastTransitionTime":"2026-02-04T08:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.230921 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.230960 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.230975 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.230994 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.231005 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:44Z","lastTransitionTime":"2026-02-04T08:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.333318 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.333390 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.333399 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.333412 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.333423 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:44Z","lastTransitionTime":"2026-02-04T08:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.436142 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.436201 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.436218 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.436243 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.436261 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:44Z","lastTransitionTime":"2026-02-04T08:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.538662 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.538730 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.538755 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.538786 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.538809 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:44Z","lastTransitionTime":"2026-02-04T08:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.643223 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.643265 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.643276 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.643292 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.643303 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:44Z","lastTransitionTime":"2026-02-04T08:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.665625 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:41:47.362452245 +0000 UTC Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.746510 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.746544 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.746554 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.746571 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.746583 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:44Z","lastTransitionTime":"2026-02-04T08:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.849911 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.849958 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.849975 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.850001 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.850021 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:44Z","lastTransitionTime":"2026-02-04T08:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.952689 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.953005 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.953215 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.953425 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:44 crc kubenswrapper[4644]: I0204 08:42:44.953584 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:44Z","lastTransitionTime":"2026-02-04T08:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.056471 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.056532 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.056551 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.056574 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.056591 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:45Z","lastTransitionTime":"2026-02-04T08:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.159435 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.159486 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.159503 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.159526 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.159547 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:45Z","lastTransitionTime":"2026-02-04T08:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.264145 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.264218 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.264243 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.264271 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.264295 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:45Z","lastTransitionTime":"2026-02-04T08:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.368140 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.368204 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.368216 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.368233 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.368246 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:45Z","lastTransitionTime":"2026-02-04T08:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.470126 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.470181 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.470198 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.470221 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.470238 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:45Z","lastTransitionTime":"2026-02-04T08:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.573258 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.573300 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.573312 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.573352 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.573364 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:45Z","lastTransitionTime":"2026-02-04T08:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.659111 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.659139 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.659205 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:45 crc kubenswrapper[4644]: E0204 08:42:45.659233 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.659299 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:45 crc kubenswrapper[4644]: E0204 08:42:45.659404 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:45 crc kubenswrapper[4644]: E0204 08:42:45.659513 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:45 crc kubenswrapper[4644]: E0204 08:42:45.659639 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.666399 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:45:22.357247454 +0000 UTC Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.676296 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.676403 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.676431 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.676475 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.676494 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:45Z","lastTransitionTime":"2026-02-04T08:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.779406 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.779457 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.779472 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.779489 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.779503 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:45Z","lastTransitionTime":"2026-02-04T08:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.882768 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.882825 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.882842 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.882865 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.882881 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:45Z","lastTransitionTime":"2026-02-04T08:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.984893 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.984936 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.984952 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.984974 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:45 crc kubenswrapper[4644]: I0204 08:42:45.984993 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:45Z","lastTransitionTime":"2026-02-04T08:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.087831 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.087872 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.087889 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.087911 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.087927 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:46Z","lastTransitionTime":"2026-02-04T08:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.190404 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.190459 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.190481 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.190510 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.190533 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:46Z","lastTransitionTime":"2026-02-04T08:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.293732 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.293794 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.293817 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.293845 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.293865 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:46Z","lastTransitionTime":"2026-02-04T08:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.396975 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.397032 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.397056 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.397088 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.397107 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:46Z","lastTransitionTime":"2026-02-04T08:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.500398 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.500459 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.500470 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.500484 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.500496 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:46Z","lastTransitionTime":"2026-02-04T08:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.604147 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.604232 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.604265 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.604294 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.604314 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:46Z","lastTransitionTime":"2026-02-04T08:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.666588 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:19:05.645932867 +0000 UTC Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.707177 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.707238 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.707255 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.707277 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.707298 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:46Z","lastTransitionTime":"2026-02-04T08:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.814914 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.814986 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.815047 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.815076 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.815095 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:46Z","lastTransitionTime":"2026-02-04T08:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.917782 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.917868 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.917886 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.917941 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:46 crc kubenswrapper[4644]: I0204 08:42:46.917960 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:46Z","lastTransitionTime":"2026-02-04T08:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.020867 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.020923 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.020939 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.020961 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.020979 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:47Z","lastTransitionTime":"2026-02-04T08:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.123373 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.123422 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.123433 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.123450 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.123460 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:47Z","lastTransitionTime":"2026-02-04T08:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.226334 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.226379 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.226388 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.226405 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.226414 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:47Z","lastTransitionTime":"2026-02-04T08:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.331204 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.331271 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.331293 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.331322 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.331378 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:47Z","lastTransitionTime":"2026-02-04T08:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.434193 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.434254 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.434271 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.434296 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.434313 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:47Z","lastTransitionTime":"2026-02-04T08:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.537900 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.537961 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.537982 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.538009 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.538027 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:47Z","lastTransitionTime":"2026-02-04T08:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.641481 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.641520 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.641530 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.641546 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.641557 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:47Z","lastTransitionTime":"2026-02-04T08:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.659150 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.659198 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.659173 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.659159 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:47 crc kubenswrapper[4644]: E0204 08:42:47.659355 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:47 crc kubenswrapper[4644]: E0204 08:42:47.659491 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:47 crc kubenswrapper[4644]: E0204 08:42:47.659632 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:47 crc kubenswrapper[4644]: E0204 08:42:47.659801 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.666792 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:44:25.403135937 +0000 UTC Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.745588 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.745638 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.745649 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.745667 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.745679 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:47Z","lastTransitionTime":"2026-02-04T08:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.848654 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.848707 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.848727 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.848754 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.848772 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:47Z","lastTransitionTime":"2026-02-04T08:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.951799 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.951924 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.951940 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.951959 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:47 crc kubenswrapper[4644]: I0204 08:42:47.951971 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:47Z","lastTransitionTime":"2026-02-04T08:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.057007 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.057064 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.057081 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.057107 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.057124 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:48Z","lastTransitionTime":"2026-02-04T08:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.165308 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.165438 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.165472 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.165511 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.165539 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:48Z","lastTransitionTime":"2026-02-04T08:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.268136 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.268200 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.268212 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.268231 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.268246 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:48Z","lastTransitionTime":"2026-02-04T08:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.370499 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.370556 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.370573 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.370596 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.370613 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:48Z","lastTransitionTime":"2026-02-04T08:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.474141 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.474198 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.474217 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.474276 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.474297 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:48Z","lastTransitionTime":"2026-02-04T08:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.577603 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.577683 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.577707 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.577739 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.577762 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:48Z","lastTransitionTime":"2026-02-04T08:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.667396 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 02:49:38.226583877 +0000 UTC Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.680915 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.680982 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.681003 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.681029 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.681046 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:48Z","lastTransitionTime":"2026-02-04T08:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.783727 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.783766 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.783801 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.783821 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.783830 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:48Z","lastTransitionTime":"2026-02-04T08:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.886306 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.886406 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.886426 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.886452 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.886471 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:48Z","lastTransitionTime":"2026-02-04T08:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.989285 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.989351 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.989365 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.989380 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:48 crc kubenswrapper[4644]: I0204 08:42:48.989392 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:48Z","lastTransitionTime":"2026-02-04T08:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.092403 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.092462 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.092481 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.092509 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.092528 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:49Z","lastTransitionTime":"2026-02-04T08:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.195977 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.196032 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.196052 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.196089 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.196111 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:49Z","lastTransitionTime":"2026-02-04T08:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.298425 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.298541 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.298568 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.298679 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.298698 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:49Z","lastTransitionTime":"2026-02-04T08:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.401828 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.401877 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.401891 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.401909 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.401922 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:49Z","lastTransitionTime":"2026-02-04T08:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.505290 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.505375 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.505394 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.505417 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.505433 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:49Z","lastTransitionTime":"2026-02-04T08:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.608453 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.608502 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.608518 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.608540 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.608557 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:49Z","lastTransitionTime":"2026-02-04T08:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.658828 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:49 crc kubenswrapper[4644]: E0204 08:42:49.659028 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.659296 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:49 crc kubenswrapper[4644]: E0204 08:42:49.659434 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.659636 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:49 crc kubenswrapper[4644]: E0204 08:42:49.659740 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.659932 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:49 crc kubenswrapper[4644]: E0204 08:42:49.660023 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.668419 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:00:08.112214801 +0000 UTC Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.711594 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.711682 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.711703 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.711731 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.711753 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:49Z","lastTransitionTime":"2026-02-04T08:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.814139 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.814169 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.814180 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.814195 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.814211 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:49Z","lastTransitionTime":"2026-02-04T08:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.917193 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.917237 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.917249 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.917265 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:49 crc kubenswrapper[4644]: I0204 08:42:49.917278 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:49Z","lastTransitionTime":"2026-02-04T08:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.020077 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.020114 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.020122 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.020133 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.020142 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:50Z","lastTransitionTime":"2026-02-04T08:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.123473 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.123532 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.123548 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.123572 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.123591 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:50Z","lastTransitionTime":"2026-02-04T08:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.228769 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.228816 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.228832 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.228854 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.228869 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:50Z","lastTransitionTime":"2026-02-04T08:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.332043 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.332112 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.332134 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.332164 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.332186 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:50Z","lastTransitionTime":"2026-02-04T08:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.435165 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.435187 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.435195 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.435208 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.435216 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:50Z","lastTransitionTime":"2026-02-04T08:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.539037 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.539102 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.539123 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.539149 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.539170 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:50Z","lastTransitionTime":"2026-02-04T08:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.642593 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.642621 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.642630 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.642641 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.642650 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:50Z","lastTransitionTime":"2026-02-04T08:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.659390 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.659444 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.659466 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.659490 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.659506 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:50Z","lastTransitionTime":"2026-02-04T08:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.669097 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:14:02.017145621 +0000 UTC Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.673569 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5661fb-774f-400f-8a08-21e749365b53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac7d60c75f87eb470bf996682e426f2d3568c4cef2915d071ab07b92fa86d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9273095a30ec465020d3077f9397def4dffc5ce8c852a169beca13
08e46de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9273095a30ec465020d3077f9397def4dffc5ce8c852a169beca1308e46de62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: E0204 08:42:50.677613 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.683062 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.683101 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.683114 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.683132 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.683144 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:50Z","lastTransitionTime":"2026-02-04T08:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.695280 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: E0204 08:42:50.696235 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.702179 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.702215 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.702223 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.702238 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.702252 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:50Z","lastTransitionTime":"2026-02-04T08:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.712677 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: E0204 08:42:50.716084 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.720383 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.720509 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.720595 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.720677 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.720749 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:50Z","lastTransitionTime":"2026-02-04T08:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.726138 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: E0204 08:42:50.735868 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.739878 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.745966 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.746003 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.746019 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.746034 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.746044 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:50Z","lastTransitionTime":"2026-02-04T08:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.763985 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b32206ea-5df9-4a4f-a9b1-a9bfa34c4b6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: E0204 08:42:50.767120 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: E0204 08:42:50.767371 4644 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.769640 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.769672 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.769684 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.769701 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.769712 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:50Z","lastTransitionTime":"2026-02-04T08:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.778999 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.790887 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.810089 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:24Z\\\",\\\"message\\\":\\\"7 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:24.723853 6227 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 08:42:24.723916 6227 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:24.724730 6227 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 08:42:24.724771 6227 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:24.724809 6227 factory.go:656] Stopping watch factory\\\\nI0204 08:42:24.724842 6227 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:24.724854 6227 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 08:42:24.730401 6227 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0204 08:42:24.730452 6227 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0204 08:42:24.730528 6227 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:24.730583 6227 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:24.730687 6227 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.822579 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.835349 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T0
8:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.847371 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.863612 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.872313 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.872367 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.872379 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.872397 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.872408 4644 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:50Z","lastTransitionTime":"2026-02-04T08:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.876251 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.892271 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://842358f4e1f1cf7051120bc0c0df667d8bfd38ea2aa8c58d059b3cec52839077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:39Z\\\",\\\"message\\\":\\\"2026-02-04T08:41:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3\\\\n2026-02-04T08:41:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3 to /host/opt/cni/bin/\\\\n2026-02-04T08:41:54Z [verbose] multus-daemon started\\\\n2026-02-04T08:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-04T08:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
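
The multus container message just above records the daemon polling for the OVN readiness-indicator file for roughly 45 seconds (08:41:54 to 08:42:39) before exiting. A minimal sketch of that kind of wait loop, using only the standard library — multus itself goes through apimachinery's PollImmediate, per the lowercase "pollimmediate error" in the message, so treat this stdlib loop, with the path and timeout taken from the log, as illustrative rather than multus code:

```go
// Sketch of a readiness-indicator wait of the kind the multus container
// log shows: poll until the file exists or the budget runs out.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path until it exists or timeout elapses.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // indicator file present: default network is ready
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Path from the log; the 45s budget matches the 08:41:54 -> 08:42:39
	// gap in the multus container's output.
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 45*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
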
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.906264 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.919976 4644 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.935954 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.950277 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:50Z is after 2025-08-24T17:21:41Z" Feb 04 
08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.975138 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.975289 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.975313 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.975416 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:50 crc kubenswrapper[4644]: I0204 08:42:50.975435 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:50Z","lastTransitionTime":"2026-02-04T08:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.077549 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.077584 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.077594 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.077608 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.077619 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:51Z","lastTransitionTime":"2026-02-04T08:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.181740 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.181990 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.182068 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.182177 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.182267 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:51Z","lastTransitionTime":"2026-02-04T08:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.286186 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.286458 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.286481 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.286503 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.286518 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:51Z","lastTransitionTime":"2026-02-04T08:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.389033 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.389105 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.389123 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.389150 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.389167 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:51Z","lastTransitionTime":"2026-02-04T08:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.492031 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.492088 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.492099 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.492113 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.492140 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:51Z","lastTransitionTime":"2026-02-04T08:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.596025 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.596071 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.596083 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.596101 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.596113 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:51Z","lastTransitionTime":"2026-02-04T08:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.659151 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:51 crc kubenswrapper[4644]: E0204 08:42:51.659384 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.659650 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:51 crc kubenswrapper[4644]: E0204 08:42:51.659755 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.659952 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:51 crc kubenswrapper[4644]: E0204 08:42:51.660057 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.660258 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:51 crc kubenswrapper[4644]: E0204 08:42:51.660383 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.669628 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:10:24.6046618 +0000 UTC Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.699071 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.699315 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.699444 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.699561 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.699649 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:51Z","lastTransitionTime":"2026-02-04T08:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.802891 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.803209 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.803314 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.803509 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.803597 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:51Z","lastTransitionTime":"2026-02-04T08:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.906095 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.906167 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.906185 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.906209 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:51 crc kubenswrapper[4644]: I0204 08:42:51.906227 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:51Z","lastTransitionTime":"2026-02-04T08:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.009097 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.009148 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.009165 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.009189 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.009207 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:52Z","lastTransitionTime":"2026-02-04T08:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.112550 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.112710 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.112739 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.112769 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.112791 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:52Z","lastTransitionTime":"2026-02-04T08:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.214699 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.214732 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.214740 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.214753 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.214763 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:52Z","lastTransitionTime":"2026-02-04T08:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.317567 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.317640 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.317657 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.317683 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.317700 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:52Z","lastTransitionTime":"2026-02-04T08:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.420050 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.420108 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.420125 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.420147 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.420164 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:52Z","lastTransitionTime":"2026-02-04T08:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.527027 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.527732 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.528402 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.528685 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.528913 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:52Z","lastTransitionTime":"2026-02-04T08:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.631989 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.632412 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.632547 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.632712 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.632840 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:52Z","lastTransitionTime":"2026-02-04T08:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.670651 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 21:55:16.023060052 +0000 UTC Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.738000 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.738052 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.738062 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.738080 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.738094 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:52Z","lastTransitionTime":"2026-02-04T08:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.840977 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.841026 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.841045 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.841066 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.841082 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:52Z","lastTransitionTime":"2026-02-04T08:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
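
The two certificate_manager lines above (08:42:51 and 08:42:52) report the same kubelet-serving certificate expiry but different rotation deadlines, because the deadline is re-drawn with jitter on each evaluation. A sketch of that selection, assuming client-go's usual rule of picking a uniform point roughly 70-90% of the way through the validity window; the NotBefore value below is hypothetical, since the log only shows the expiry:

```go
// Why the same expiry (2026-02-24 05:53:03) yields deadlines of both
// 2025-11-11 and 2025-11-29: the rotation deadline is a fresh random
// point in [70%, 90%] of the certificate lifetime each time it is
// computed. NotBefore here is an assumed one-year lifetime.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline mimics the jittered deadline choice.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(frac * float64(lifetime)))
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                   // assumption
	for i := 0; i < 2; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```
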
Has your network provider started?"} Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.944233 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.944709 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.944941 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.945189 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:52 crc kubenswrapper[4644]: I0204 08:42:52.945412 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:52Z","lastTransitionTime":"2026-02-04T08:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.048058 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.048116 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.048131 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.048150 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.048161 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:53Z","lastTransitionTime":"2026-02-04T08:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.151531 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.151601 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.151623 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.151653 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.151677 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:53Z","lastTransitionTime":"2026-02-04T08:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.254513 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.254832 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.254975 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.255071 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.255149 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:53Z","lastTransitionTime":"2026-02-04T08:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.364230 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.364289 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.364306 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.364355 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.364374 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:53Z","lastTransitionTime":"2026-02-04T08:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.467193 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.467236 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.467255 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.467280 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.467298 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:53Z","lastTransitionTime":"2026-02-04T08:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
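
The NetworkReady=false condition repeating above clears only once a CNI network configuration shows up in /etc/kubernetes/cni/net.d/, which on this cluster is OVN-Kubernetes's job (the multus message earlier waits on 10-ovn-kubernetes.conf). A sketch of the check the runtime effectively performs through libcni: scan the conf directory for usable config files, where .conf, .conflist, and .json are the extensions libcni accepts:

```go
// Minimal stand-in for the container runtime's CNI readiness check:
// the network is "not ready" while this directory has no config file.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigFiles(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var files []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			files = append(files, filepath.Join(dir, e.Name()))
		}
	}
	return files, nil
}

func main() {
	files, err := cniConfigFiles("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("read conf dir:", err)
		return
	}
	if len(files) == 0 {
		fmt.Println("no CNI configuration file found; network not ready")
		return
	}
	fmt.Println("CNI configs:", files)
}
```
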
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.533857 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.534060 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.534105 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.534145 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.534184 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.534437 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.534470 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.534490 4644 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.534559 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-04 08:43:57.53453598 +0000 UTC m=+147.574593775 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.534842 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:57.534827108 +0000 UTC m=+147.574884903 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.534937 4644 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.534985 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 08:43:57.534971773 +0000 UTC m=+147.575029568 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.535146 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.535215 4644 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.535238 4644 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.535375 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-04 08:43:57.535305232 +0000 UTC m=+147.575363017 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.535570 4644 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.535673 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 08:43:57.535644592 +0000 UTC m=+147.575702437 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.570428 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.570469 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.570486 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.570506 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.570521 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:53Z","lastTransitionTime":"2026-02-04T08:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.659125 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.659172 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.659209 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.659190 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.659262 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5"
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.659469 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.659625 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 08:42:53 crc kubenswrapper[4644]: E0204 08:42:53.659799 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.671235 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:58:24.359116784 +0000 UTC
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.673671 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.673726 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.673749 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.673779 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.673802 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:53Z","lastTransitionTime":"2026-02-04T08:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.776465 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.776534 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.776553 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.776578 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.776599 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:53Z","lastTransitionTime":"2026-02-04T08:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.880067 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.880129 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.880149 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.880257 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.880288 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:53Z","lastTransitionTime":"2026-02-04T08:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.984233 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.984287 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.984306 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.984369 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:53 crc kubenswrapper[4644]: I0204 08:42:53.984387 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:53Z","lastTransitionTime":"2026-02-04T08:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.087589 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.087652 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.087669 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.087692 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.087711 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:54Z","lastTransitionTime":"2026-02-04T08:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.191473 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.191869 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.191886 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.192052 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.192073 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:54Z","lastTransitionTime":"2026-02-04T08:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.296212 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.296268 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.296286 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.296309 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.296361 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:54Z","lastTransitionTime":"2026-02-04T08:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.400222 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.400708 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.400913 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.401152 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.401412 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:54Z","lastTransitionTime":"2026-02-04T08:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.505265 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.505371 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.505396 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.505427 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.505447 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:54Z","lastTransitionTime":"2026-02-04T08:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.608082 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.608146 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.608170 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.608197 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.608213 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:54Z","lastTransitionTime":"2026-02-04T08:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.672178 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:03:17.303386133 +0000 UTC
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.711672 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.711737 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.711754 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.711779 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.711796 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:54Z","lastTransitionTime":"2026-02-04T08:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.815191 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.815234 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.815246 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.815266 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.815280 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:54Z","lastTransitionTime":"2026-02-04T08:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.917916 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.918575 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.918778 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.918893 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:54 crc kubenswrapper[4644]: I0204 08:42:54.918994 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:54Z","lastTransitionTime":"2026-02-04T08:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.022444 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.022504 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.022521 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.022543 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.022559 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:55Z","lastTransitionTime":"2026-02-04T08:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.124693 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.125001 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.125077 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.125177 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.125260 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:55Z","lastTransitionTime":"2026-02-04T08:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.227929 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.227959 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.227969 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.227982 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.227992 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:55Z","lastTransitionTime":"2026-02-04T08:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.331613 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.331679 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.331703 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.331729 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.331747 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:55Z","lastTransitionTime":"2026-02-04T08:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.434755 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.434814 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.434831 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.434854 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.434871 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:55Z","lastTransitionTime":"2026-02-04T08:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.537448 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.537486 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.537498 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.537513 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.537523 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:55Z","lastTransitionTime":"2026-02-04T08:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.647561 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.647903 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.648015 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.648162 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.648315 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:55Z","lastTransitionTime":"2026-02-04T08:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.659257 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.659766 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 08:42:55 crc kubenswrapper[4644]: E0204 08:42:55.659946 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.659997 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.659994 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 08:42:55 crc kubenswrapper[4644]: E0204 08:42:55.660883 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.660930 4644 scope.go:117] "RemoveContainer" containerID="247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e"
Feb 04 08:42:55 crc kubenswrapper[4644]: E0204 08:42:55.660997 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 08:42:55 crc kubenswrapper[4644]: E0204 08:42:55.661222 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.672598 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 23:57:49.057271677 +0000 UTC
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.752249 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.752285 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.752294 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.752309 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.752320 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:55Z","lastTransitionTime":"2026-02-04T08:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.855636 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.855726 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.855739 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.855752 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.855763 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:55Z","lastTransitionTime":"2026-02-04T08:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.959211 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.959262 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.959275 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.959293 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:55 crc kubenswrapper[4644]: I0204 08:42:55.959306 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:55Z","lastTransitionTime":"2026-02-04T08:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.061592 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.061643 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.061658 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.061680 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.061695 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:56Z","lastTransitionTime":"2026-02-04T08:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.163597 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.163633 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.163644 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.163659 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.163669 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:56Z","lastTransitionTime":"2026-02-04T08:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.214660 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/2.log"
Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.217728 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerStarted","Data":"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c"}
Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.218171 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg"
Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.233349 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5661fb-774f-400f-8a08-21e749365b53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac7d60c75f87eb470bf996682e426f2d3568c4cef2915d071ab07b92fa86d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9273095a30ec465020d3077f9397def4dffc5ce8c852a169beca1308e46de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9273095a30ec465020d3077f9397def4dffc5ce8c852a169beca1308e46de62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.266225 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.266270 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.266281 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.266299 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.266310 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:56Z","lastTransitionTime":"2026-02-04T08:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.267807 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.284478 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.300972 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.315515 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.329188 4644 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b32206ea-5df9-4a4f-a9b1-a9bfa34c4b6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.341781 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.352351 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.368334 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.368371 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.368380 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.368410 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.368419 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:56Z","lastTransitionTime":"2026-02-04T08:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.371729 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1
e483cefcc7145408bdd41b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:24Z\\\",\\\"message\\\":\\\"7 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:24.723853 6227 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 08:42:24.723916 6227 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:24.724730 6227 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 08:42:24.724771 6227 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:24.724809 6227 factory.go:656] Stopping watch factory\\\\nI0204 08:42:24.724842 6227 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:24.724854 6227 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 08:42:24.730401 6227 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0204 08:42:24.730452 6227 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0204 08:42:24.730528 6227 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:24.730583 6227 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:24.730687 6227 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.383274 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.398794 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.412221 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.427659 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.442686 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.457671 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://842358f4e1f1cf7051120bc0c0df667d8bfd38ea2aa8c58d059b3cec52839077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:39Z\\\",\\\"message\\\":\\\"2026-02-04T08:41:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3\\\\n2026-02-04T08:41:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3 to /host/opt/cni/bin/\\\\n2026-02-04T08:41:54Z [verbose] multus-daemon started\\\\n2026-02-04T08:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-04T08:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.470723 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.470790 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.470804 4644 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.470828 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.470843 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:56Z","lastTransitionTime":"2026-02-04T08:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.472476 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.483121 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 
08:42:56.497364 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.508991 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:56Z is after 2025-08-24T17:21:41Z" Feb 04 
08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.573454 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.573486 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.573494 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.573530 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.573542 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:56Z","lastTransitionTime":"2026-02-04T08:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.673498 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:51:54.343461119 +0000 UTC Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.676651 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.676751 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.676816 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.676849 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.676869 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:56Z","lastTransitionTime":"2026-02-04T08:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.780454 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.781029 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.781061 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.781094 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.781114 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:56Z","lastTransitionTime":"2026-02-04T08:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.883921 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.884038 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.884058 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.884084 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.884102 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:56Z","lastTransitionTime":"2026-02-04T08:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.987995 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.988056 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.988078 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.988105 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:56 crc kubenswrapper[4644]: I0204 08:42:56.988126 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:56Z","lastTransitionTime":"2026-02-04T08:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.090470 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.090525 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.090565 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.090598 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.090619 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:57Z","lastTransitionTime":"2026-02-04T08:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.193578 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.193645 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.193668 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.193698 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.193720 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:57Z","lastTransitionTime":"2026-02-04T08:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.224899 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/3.log" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.226194 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/2.log" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.231892 4644 generic.go:334] "Generic (PLEG): container finished" podID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerID="6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c" exitCode=1 Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.232182 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerDied","Data":"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c"} Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.232467 4644 scope.go:117] "RemoveContainer" containerID="247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.233683 4644 scope.go:117] "RemoveContainer" containerID="6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c" Feb 04 08:42:57 crc kubenswrapper[4644]: E0204 08:42:57.234029 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.255664 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b32206ea-5df9-4a4f-a9b1-a9bfa34c4b6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.278888 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.298771 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.298842 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.298868 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.298898 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.298922 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:57Z","lastTransitionTime":"2026-02-04T08:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.299164 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.333382 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://247a24df416a6b1d4afd7fa12e89844ab4856a52a2b8cc65e5bbdc42b794d42e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:24Z\\\",\\\"message\\\":\\\"7 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 08:42:24.723853 6227 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 08:42:24.723916 6227 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 08:42:24.724730 6227 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 08:42:24.724771 6227 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 08:42:24.724809 6227 factory.go:656] Stopping watch factory\\\\nI0204 08:42:24.724842 6227 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 08:42:24.724854 6227 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 08:42:24.730401 6227 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0204 08:42:24.730452 6227 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0204 08:42:24.730528 6227 ovnkube.go:599] Stopped ovnkube\\\\nI0204 08:42:24.730583 6227 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:24.730687 6227 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:56Z\\\",\\\"message\\\":\\\"d syncing service metrics on namespace openshift-etcd-operator for network=default : 2.211265ms\\\\nI0204 08:42:56.512851 6650 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0204 08:42:56.512752 6650 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0204 08:42:56.512737 6650 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-ckvx5 in node crc\\\\nI0204 08:42:56.512866 6650 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0204 08:42:56.512872 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-ckvx5 after 0 failed attempt(s)\\\\nI0204 08:42:56.512878 6650 default_network_controller.go:776] Recording success event on pod 
openshift-image-registry/node-ca-ckvx5\\\\nI0204 08:42:56.512881 6650 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:56.512927 6650 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.351236 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.370159 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.391702 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.402138 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.402246 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.402267 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.402381 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.402414 4644 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:57Z","lastTransitionTime":"2026-02-04T08:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.407976 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.429406 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://842358f4e1f1cf7051120bc0c0df667d8bfd38ea2aa8c58d059b3cec52839077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:39Z\\\",\\\"message\\\":\\\"2026-02-04T08:41:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3\\\\n2026-02-04T08:41:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3 to /host/opt/cni/bin/\\\\n2026-02-04T08:41:54Z [verbose] multus-daemon started\\\\n2026-02-04T08:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-04T08:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.449045 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.466730 4644 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.492605 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.505522 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.505588 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:57 crc 
kubenswrapper[4644]: I0204 08:42:57.505612 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.505644 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.505668 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:57Z","lastTransitionTime":"2026-02-04T08:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.513470 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.532217 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.548614 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5661fb-774f-400f-8a08-21e749365b53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac7d60c75f87eb470bf996682e426f2d3568c4cef2915d071ab07b92fa86d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9273095a30ec465020d3077f9397def4dffc5ce8c852a169beca1308e46de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9273095a30ec465020d3077f9397def4dffc5ce8c852a169beca1308e46de62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.584260 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"set
up\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.604252 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.608829 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.608860 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.608869 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.608884 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.608912 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:57Z","lastTransitionTime":"2026-02-04T08:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.627636 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.646385 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:57Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.658842 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:57 crc kubenswrapper[4644]: E0204 08:42:57.659008 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.659247 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:57 crc kubenswrapper[4644]: E0204 08:42:57.659394 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.659659 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:57 crc kubenswrapper[4644]: E0204 08:42:57.659892 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.660221 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:57 crc kubenswrapper[4644]: E0204 08:42:57.660521 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.673834 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:08:23.813769513 +0000 UTC Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.713425 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.713554 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.713574 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.713597 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.713686 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:57Z","lastTransitionTime":"2026-02-04T08:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.817688 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.817782 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.817804 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.818323 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.818416 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:57Z","lastTransitionTime":"2026-02-04T08:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.921713 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.921985 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.922002 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.922024 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:57 crc kubenswrapper[4644]: I0204 08:42:57.922040 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:57Z","lastTransitionTime":"2026-02-04T08:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.025927 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.025975 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.025992 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.026015 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.026032 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:58Z","lastTransitionTime":"2026-02-04T08:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.130006 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.130125 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.130196 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.130235 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.130309 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:58Z","lastTransitionTime":"2026-02-04T08:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.233245 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.233313 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.233372 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.233402 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.233424 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:58Z","lastTransitionTime":"2026-02-04T08:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.239051 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/3.log" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.246264 4644 scope.go:117] "RemoveContainer" containerID="6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c" Feb 04 08:42:58 crc kubenswrapper[4644]: E0204 08:42:58.246775 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.267414 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b32206ea-5df9-4a4f-a9b1-a9bfa34c4b6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.291791 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.315754 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.368727 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:56Z\\\",\\\"message\\\":\\\"d syncing service metrics on namespace openshift-etcd-operator for network=default : 2.211265ms\\\\nI0204 08:42:56.512851 6650 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0204 08:42:56.512752 6650 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0204 08:42:56.512737 6650 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-ckvx5 in node crc\\\\nI0204 08:42:56.512866 6650 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0204 08:42:56.512872 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-ckvx5 after 0 failed attempt(s)\\\\nI0204 08:42:56.512878 6650 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-ckvx5\\\\nI0204 08:42:56.512881 6650 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:56.512927 6650 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.369936 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.370192 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.370304 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.370490 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.370697 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:58Z","lastTransitionTime":"2026-02-04T08:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.385730 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.402157 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.421645 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.441661 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://842358f4e1f1cf7051120bc0c0df667d8bfd38ea2aa8c58d059b3cec52839077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:39Z\\\",\\\"message\\\":\\\"2026-02-04T08:41:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3\\\\n2026-02-04T08:41:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3 to /host/opt/cni/bin/\\\\n2026-02-04T08:41:54Z [verbose] multus-daemon started\\\\n2026-02-04T08:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-04T08:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.461448 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.474437 4644 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.474829 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.474973 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.475063 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.475151 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:58Z","lastTransitionTime":"2026-02-04T08:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.479308 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.499552 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 
2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.513433 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.534407 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.550829 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 
08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.564779 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5661fb-774f-400f-8a08-21e749365b53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac7d60c75f87eb470bf996682e426f2d3568c4cef2915d071ab07b92fa86d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9273095a30ec465020d3077f9397def4dffc5ce8c852a169beca1308e46de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9273095a30ec465020d3077f9397def4dffc5ce8c852a169beca1308e46de62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.578704 4644 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.578759 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.578776 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.578802 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.578819 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:58Z","lastTransitionTime":"2026-02-04T08:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.594683 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.612838 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.631786 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.651083 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:42:58Z is after 2025-08-24T17:21:41Z" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.674025 4644 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:23:51.887583849 +0000 UTC Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.681750 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.681803 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.681820 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.681882 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.681905 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:58Z","lastTransitionTime":"2026-02-04T08:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.785190 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.785233 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.785245 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.785262 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.785273 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:58Z","lastTransitionTime":"2026-02-04T08:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.888414 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.888472 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.888487 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.888509 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.888527 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:58Z","lastTransitionTime":"2026-02-04T08:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.990602 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.990640 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.990651 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.990669 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:58 crc kubenswrapper[4644]: I0204 08:42:58.990682 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:58Z","lastTransitionTime":"2026-02-04T08:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.093896 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.093970 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.093991 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.094018 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.094043 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:59Z","lastTransitionTime":"2026-02-04T08:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.196803 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.196862 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.196882 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.196907 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.196926 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:59Z","lastTransitionTime":"2026-02-04T08:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.299737 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.299801 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.299819 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.299844 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.299864 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:59Z","lastTransitionTime":"2026-02-04T08:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.403148 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.403203 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.403222 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.403255 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.403291 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:59Z","lastTransitionTime":"2026-02-04T08:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.506022 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.506079 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.506097 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.506121 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.506138 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:59Z","lastTransitionTime":"2026-02-04T08:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.617912 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.617970 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.617990 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.618018 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.618237 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:59Z","lastTransitionTime":"2026-02-04T08:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.658968 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:42:59 crc kubenswrapper[4644]: E0204 08:42:59.659090 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.659105 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:42:59 crc kubenswrapper[4644]: E0204 08:42:59.659305 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.659590 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.659598 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:42:59 crc kubenswrapper[4644]: E0204 08:42:59.659807 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:42:59 crc kubenswrapper[4644]: E0204 08:42:59.659916 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.674501 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:22:02.0274002 +0000 UTC Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.720980 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.721046 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.721059 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.721078 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.721093 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:59Z","lastTransitionTime":"2026-02-04T08:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.824690 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.824758 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.824772 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.824795 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:42:59 crc kubenswrapper[4644]: I0204 08:42:59.824813 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:59Z","lastTransitionTime":"2026-02-04T08:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:42:59.928096 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:42:59.928153 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:42:59.928174 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:42:59.928199 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:42:59.928217 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:42:59Z","lastTransitionTime":"2026-02-04T08:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.030430 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.030473 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.030487 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.030518 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.030531 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:00Z","lastTransitionTime":"2026-02-04T08:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.133848 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.133942 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.133971 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.134014 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.134040 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:00Z","lastTransitionTime":"2026-02-04T08:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.237038 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.237081 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.237092 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.237110 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.237122 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:00Z","lastTransitionTime":"2026-02-04T08:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.340369 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.340405 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.340415 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.340430 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.340441 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:00Z","lastTransitionTime":"2026-02-04T08:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.443918 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.443996 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.444015 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.444042 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.444062 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:00Z","lastTransitionTime":"2026-02-04T08:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.547089 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.547153 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.547261 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.547290 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.547312 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:00Z","lastTransitionTime":"2026-02-04T08:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.651216 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.651267 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.651285 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.651307 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.651323 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:00Z","lastTransitionTime":"2026-02-04T08:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.674745 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:14:38.335425339 +0000 UTC Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.692712 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\
"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:56Z\\\",\\\"message\\\":\\\"d syncing service metrics on namespace openshift-etcd-operator for network=default : 2.211265ms\\\\nI0204 08:42:56.512851 6650 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0204 08:42:56.512752 6650 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0204 08:42:56.512737 6650 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-ckvx5 in node crc\\\\nI0204 08:42:56.512866 6650 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0204 08:42:56.512872 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-ckvx5 after 0 failed attempt(s)\\\\nI0204 08:42:56.512878 6650 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-ckvx5\\\\nI0204 08:42:56.512881 6650 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:56.512927 6650 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.712004 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.731479 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b32206ea-5df9-4a4f-a9b1-a9bfa34c4b6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.751496 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.755103 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.755220 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.755244 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.755274 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.755297 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:00Z","lastTransitionTime":"2026-02-04T08:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.772887 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.791323 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.808142 4644 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.813596 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.813625 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.813636 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.813654 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.813666 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:00Z","lastTransitionTime":"2026-02-04T08:43:00Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.835359 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 
2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: E0204 08:43:00.839726 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.845404 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.845481 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.845509 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.845540 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.845565 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:00Z","lastTransitionTime":"2026-02-04T08:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.855820 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: E0204 08:43:00.866720 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.871185 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.871235 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.871248 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.871271 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.871298 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:00Z","lastTransitionTime":"2026-02-04T08:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.875976 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: E0204 08:43:00.886359 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.890497 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.890571 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.890589 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.890610 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.890625 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:00Z","lastTransitionTime":"2026-02-04T08:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.893760 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: E0204 08:43:00.906731 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"4
8850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.912024 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.912120 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.912140 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.912172 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.912191 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:00Z","lastTransitionTime":"2026-02-04T08:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.913402 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://842358f4e1f1cf7051120bc0c0df667d8bfd38ea2aa8c58d059b3cec52839077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:39Z\\\",\\\"message\\\":\\\"2026-02-04T08:41:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3\\\\n2026-02-04T08:41:53+00:00 [cnibincopy] Successfully 
moved files in /host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3 to /host/opt/cni/bin/\\\\n2026-02-04T08:41:54Z [verbose] multus-daemon started\\\\n2026-02-04T08:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-04T08:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: E0204 08:43:00.927506 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: E0204 08:43:00.927890 4644 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.930800 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.930837 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.930848 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.930871 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.930884 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:00Z","lastTransitionTime":"2026-02-04T08:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.931215 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.943708 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 
08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.957705 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.974399 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:00 crc kubenswrapper[4644]: I0204 08:43:00.989235 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5661fb-774f-400f-8a08-21e749365b53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac7d60c75f87eb470bf996682e426f2d3568c4cef2915d071ab07b92fa86d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9273095a30ec465020d3077f9397def4dffc5ce8c852a169beca1308e46de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9273095a30ec465020d3077f9397def4dffc5ce8c852a169beca1308e46de62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:00Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.033345 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.033396 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.033414 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.033436 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.033453 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:01Z","lastTransitionTime":"2026-02-04T08:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.038721 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:01Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.073673 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:01Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.135185 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.135223 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.135232 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.135254 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.135264 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:01Z","lastTransitionTime":"2026-02-04T08:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.238001 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.238069 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.238093 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.238121 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.238144 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:01Z","lastTransitionTime":"2026-02-04T08:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.341245 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.341279 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.341291 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.341357 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.341369 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:01Z","lastTransitionTime":"2026-02-04T08:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.443796 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.444164 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.444308 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.444510 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.444636 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:01Z","lastTransitionTime":"2026-02-04T08:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.547803 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.547874 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.547897 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.547924 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.547945 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:01Z","lastTransitionTime":"2026-02-04T08:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.650736 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.651097 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.651289 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.651611 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.651815 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:01Z","lastTransitionTime":"2026-02-04T08:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.659539 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:01 crc kubenswrapper[4644]: E0204 08:43:01.659997 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.659644 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:01 crc kubenswrapper[4644]: E0204 08:43:01.660504 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.659563 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:01 crc kubenswrapper[4644]: E0204 08:43:01.660923 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.659641 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:01 crc kubenswrapper[4644]: E0204 08:43:01.661396 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.675693 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 16:51:04.363200951 +0000 UTC Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.754777 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.754839 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.754858 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.754886 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.754908 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:01Z","lastTransitionTime":"2026-02-04T08:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.856979 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.857031 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.857041 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.857059 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.857070 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:01Z","lastTransitionTime":"2026-02-04T08:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.962685 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.962745 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.962761 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.962785 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:01 crc kubenswrapper[4644]: I0204 08:43:01.962803 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:01Z","lastTransitionTime":"2026-02-04T08:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.065803 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.065881 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.065907 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.065936 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.065961 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:02Z","lastTransitionTime":"2026-02-04T08:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.168923 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.168964 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.168975 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.168988 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.168996 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:02Z","lastTransitionTime":"2026-02-04T08:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.271128 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.271169 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.271183 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.271201 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.271219 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:02Z","lastTransitionTime":"2026-02-04T08:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.373409 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.373461 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.373477 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.373499 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.373516 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:02Z","lastTransitionTime":"2026-02-04T08:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.476793 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.476843 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.476861 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.476884 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.476902 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:02Z","lastTransitionTime":"2026-02-04T08:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.580905 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.580967 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.580989 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.581016 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.581034 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:02Z","lastTransitionTime":"2026-02-04T08:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.676744 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:10:24.760114247 +0000 UTC Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.683993 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.684067 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.684087 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.684109 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.684126 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:02Z","lastTransitionTime":"2026-02-04T08:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.787054 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.787117 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.787134 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.787161 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.787183 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:02Z","lastTransitionTime":"2026-02-04T08:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.890855 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.890905 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.890923 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.890945 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.890963 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:02Z","lastTransitionTime":"2026-02-04T08:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.994672 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.994735 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.994794 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.994824 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:02 crc kubenswrapper[4644]: I0204 08:43:02.994841 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:02Z","lastTransitionTime":"2026-02-04T08:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.097466 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.097526 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.097545 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.097573 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.097590 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:03Z","lastTransitionTime":"2026-02-04T08:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.200837 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.200887 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.200904 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.200929 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.200951 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:03Z","lastTransitionTime":"2026-02-04T08:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.303758 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.303806 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.303817 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.303835 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.303850 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:03Z","lastTransitionTime":"2026-02-04T08:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.407578 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.407629 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.407641 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.407661 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.407672 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:03Z","lastTransitionTime":"2026-02-04T08:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.520474 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.520567 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.520586 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.520613 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.520631 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:03Z","lastTransitionTime":"2026-02-04T08:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.624676 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.624712 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.624721 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.624739 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.624750 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:03Z","lastTransitionTime":"2026-02-04T08:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.658891 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.659609 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:03 crc kubenswrapper[4644]: E0204 08:43:03.659901 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.659951 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.659929 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:03 crc kubenswrapper[4644]: E0204 08:43:03.660117 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:03 crc kubenswrapper[4644]: E0204 08:43:03.660220 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:03 crc kubenswrapper[4644]: E0204 08:43:03.660308 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.676847 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:21:52.748417615 +0000 UTC Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.727551 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.727588 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.727600 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.727614 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.727626 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:03Z","lastTransitionTime":"2026-02-04T08:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.830403 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.830434 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.830442 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.830454 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.830463 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:03Z","lastTransitionTime":"2026-02-04T08:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.933706 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.934078 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.934208 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.934362 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:03 crc kubenswrapper[4644]: I0204 08:43:03.934530 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:03Z","lastTransitionTime":"2026-02-04T08:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.037767 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.037812 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.037827 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.037849 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.037864 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:04Z","lastTransitionTime":"2026-02-04T08:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.140991 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.141485 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.141585 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.141654 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.141710 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:04Z","lastTransitionTime":"2026-02-04T08:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.245722 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.245778 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.245795 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.245819 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.245835 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:04Z","lastTransitionTime":"2026-02-04T08:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.349155 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.349528 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.349678 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.349880 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.350087 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:04Z","lastTransitionTime":"2026-02-04T08:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.452673 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.452715 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.452727 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.452776 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.452791 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:04Z","lastTransitionTime":"2026-02-04T08:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.555643 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.555711 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.555733 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.555757 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.555775 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:04Z","lastTransitionTime":"2026-02-04T08:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.659280 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.659369 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.659388 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.659535 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.659572 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:04Z","lastTransitionTime":"2026-02-04T08:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.677479 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:11:25.953517324 +0000 UTC Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.763035 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.763426 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.763574 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.763706 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.763834 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:04Z","lastTransitionTime":"2026-02-04T08:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.868317 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.868676 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.868704 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.868736 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.868753 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:04Z","lastTransitionTime":"2026-02-04T08:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.971931 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.972007 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.972026 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.972055 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:04 crc kubenswrapper[4644]: I0204 08:43:04.972081 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:04Z","lastTransitionTime":"2026-02-04T08:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.075539 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.075607 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.075628 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.075653 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.075671 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:05Z","lastTransitionTime":"2026-02-04T08:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.179475 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.179530 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.179549 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.179572 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.179589 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:05Z","lastTransitionTime":"2026-02-04T08:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.281982 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.282044 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.282064 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.282087 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.282107 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:05Z","lastTransitionTime":"2026-02-04T08:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.384735 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.384822 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.384840 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.384897 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.384918 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:05Z","lastTransitionTime":"2026-02-04T08:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.488216 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.488377 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.488409 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.488439 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.488459 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:05Z","lastTransitionTime":"2026-02-04T08:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.591535 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.591602 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.591619 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.591642 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.591659 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:05Z","lastTransitionTime":"2026-02-04T08:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.659319 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.659372 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.659354 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.659314 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:05 crc kubenswrapper[4644]: E0204 08:43:05.659474 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:05 crc kubenswrapper[4644]: E0204 08:43:05.659545 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:05 crc kubenswrapper[4644]: E0204 08:43:05.659593 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:05 crc kubenswrapper[4644]: E0204 08:43:05.659638 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.678208 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:33:45.176944447 +0000 UTC Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.694439 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.694487 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.694498 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.694515 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.694528 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:05Z","lastTransitionTime":"2026-02-04T08:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.798164 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.798213 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.798226 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.798246 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.798260 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:05Z","lastTransitionTime":"2026-02-04T08:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.901957 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.902002 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.902018 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.902042 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:05 crc kubenswrapper[4644]: I0204 08:43:05.902058 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:05Z","lastTransitionTime":"2026-02-04T08:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.004825 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.004942 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.004959 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.004984 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.005001 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:06Z","lastTransitionTime":"2026-02-04T08:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.108194 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.108243 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.108256 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.108273 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.108285 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:06Z","lastTransitionTime":"2026-02-04T08:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.210825 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.210913 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.210931 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.210957 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.210974 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:06Z","lastTransitionTime":"2026-02-04T08:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.314866 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.314956 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.314973 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.314995 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.315010 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:06Z","lastTransitionTime":"2026-02-04T08:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.422887 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.422930 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.422939 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.422954 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.422963 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:06Z","lastTransitionTime":"2026-02-04T08:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.526001 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.526115 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.526139 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.526171 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.526195 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:06Z","lastTransitionTime":"2026-02-04T08:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.629191 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.629266 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.629290 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.629374 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.629400 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:06Z","lastTransitionTime":"2026-02-04T08:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.678931 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:55:24.535052146 +0000 UTC Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.732767 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.732835 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.732853 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.732878 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.732898 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:06Z","lastTransitionTime":"2026-02-04T08:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.836384 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.836460 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.836474 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.836492 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.836535 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:06Z","lastTransitionTime":"2026-02-04T08:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.940570 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.940628 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.940646 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.940674 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:06 crc kubenswrapper[4644]: I0204 08:43:06.940692 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:06Z","lastTransitionTime":"2026-02-04T08:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.043918 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.044000 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.044014 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.044036 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.044073 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:07Z","lastTransitionTime":"2026-02-04T08:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.147297 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.147428 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.147457 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.147524 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.147551 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:07Z","lastTransitionTime":"2026-02-04T08:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.253720 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.253760 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.253772 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.253789 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.253800 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:07Z","lastTransitionTime":"2026-02-04T08:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.356517 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.356575 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.356592 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.356619 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.356638 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:07Z","lastTransitionTime":"2026-02-04T08:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.459994 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.460035 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.460046 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.460066 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.460077 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:07Z","lastTransitionTime":"2026-02-04T08:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.562977 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.563051 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.563065 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.563085 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.563096 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:07Z","lastTransitionTime":"2026-02-04T08:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.659587 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.659600 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.659780 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.659848 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:07 crc kubenswrapper[4644]: E0204 08:43:07.660038 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:07 crc kubenswrapper[4644]: E0204 08:43:07.660213 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:07 crc kubenswrapper[4644]: E0204 08:43:07.660460 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:07 crc kubenswrapper[4644]: E0204 08:43:07.660618 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.666489 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.666544 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.666562 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.666585 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.666601 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:07Z","lastTransitionTime":"2026-02-04T08:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.680840 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:53:03.934790046 +0000 UTC Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.770236 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.770304 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.770324 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.770396 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.770413 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:07Z","lastTransitionTime":"2026-02-04T08:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.872742 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.872803 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.872821 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.872847 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.872871 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:07Z","lastTransitionTime":"2026-02-04T08:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.976374 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.976458 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.976479 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.976502 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:07 crc kubenswrapper[4644]: I0204 08:43:07.976519 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:07Z","lastTransitionTime":"2026-02-04T08:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.080015 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.080070 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.080087 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.080108 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.080126 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:08Z","lastTransitionTime":"2026-02-04T08:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.183142 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.183211 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.183227 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.183250 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.183267 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:08Z","lastTransitionTime":"2026-02-04T08:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.286505 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.286583 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.286602 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.286625 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.286642 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:08Z","lastTransitionTime":"2026-02-04T08:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.389999 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.390059 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.390076 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.390100 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.390120 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:08Z","lastTransitionTime":"2026-02-04T08:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.493319 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.493406 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.493422 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.493446 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.493463 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:08Z","lastTransitionTime":"2026-02-04T08:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.596840 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.596886 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.596901 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.596920 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.596935 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:08Z","lastTransitionTime":"2026-02-04T08:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.681762 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 04:14:08.890874279 +0000 UTC Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.699175 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.699213 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.699224 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.699242 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.699256 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:08Z","lastTransitionTime":"2026-02-04T08:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.801843 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.801903 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.801927 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.801956 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.801976 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:08Z","lastTransitionTime":"2026-02-04T08:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.905630 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.905704 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.905725 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.905755 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:08 crc kubenswrapper[4644]: I0204 08:43:08.905792 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:08Z","lastTransitionTime":"2026-02-04T08:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.008663 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.008735 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.008753 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.008777 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.008796 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:09Z","lastTransitionTime":"2026-02-04T08:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.111055 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.111100 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.111113 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.111131 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.111146 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:09Z","lastTransitionTime":"2026-02-04T08:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.214583 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.214636 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.214654 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.214679 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.214696 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:09Z","lastTransitionTime":"2026-02-04T08:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.317155 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.317187 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.317196 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.317209 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.317217 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:09Z","lastTransitionTime":"2026-02-04T08:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.465611 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.465667 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.465679 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.465696 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.465708 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:09Z","lastTransitionTime":"2026-02-04T08:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.568604 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.568656 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.568673 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.568697 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.568715 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:09Z","lastTransitionTime":"2026-02-04T08:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.659281 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.659318 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.659511 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.659318 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:09 crc kubenswrapper[4644]: E0204 08:43:09.659761 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:09 crc kubenswrapper[4644]: E0204 08:43:09.659898 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:09 crc kubenswrapper[4644]: E0204 08:43:09.659978 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:09 crc kubenswrapper[4644]: E0204 08:43:09.660068 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.671548 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.671607 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.671628 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.671653 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.671671 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:09Z","lastTransitionTime":"2026-02-04T08:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.682012 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:22:06.976521755 +0000 UTC Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.774546 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.774588 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.774601 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.774622 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.774636 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:09Z","lastTransitionTime":"2026-02-04T08:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.878163 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.878234 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.878258 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.878294 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.878317 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:09Z","lastTransitionTime":"2026-02-04T08:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.981656 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.981710 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.981727 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.981750 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:09 crc kubenswrapper[4644]: I0204 08:43:09.981767 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:09Z","lastTransitionTime":"2026-02-04T08:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.085714 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.085775 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.085793 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.085817 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.085834 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:10Z","lastTransitionTime":"2026-02-04T08:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.107673 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs\") pod \"network-metrics-daemon-f6ghp\" (UID: \"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\") " pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:10 crc kubenswrapper[4644]: E0204 08:43:10.107875 4644 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:43:10 crc kubenswrapper[4644]: E0204 08:43:10.107966 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs podName:b0c747eb-fe5e-4cad-a021-307cc2ed1ad5 nodeName:}" failed. No retries permitted until 2026-02-04 08:44:14.107940958 +0000 UTC m=+164.147998753 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs") pod "network-metrics-daemon-f6ghp" (UID: "b0c747eb-fe5e-4cad-a021-307cc2ed1ad5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.188726 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.188805 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.188818 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.188834 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.188872 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:10Z","lastTransitionTime":"2026-02-04T08:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.292366 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.292448 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.292465 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.292518 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.292537 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:10Z","lastTransitionTime":"2026-02-04T08:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.395911 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.396007 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.396027 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.396054 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.396072 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:10Z","lastTransitionTime":"2026-02-04T08:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.499857 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.499963 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.499982 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.500011 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.500032 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:10Z","lastTransitionTime":"2026-02-04T08:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.603318 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.603408 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.603426 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.603449 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.603466 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:10Z","lastTransitionTime":"2026-02-04T08:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.678760 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2a87f38-c8a0-4007-b926-1dafb84e7483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9883e38f9140efd389d0e525c38e3c995a594205cb218245594dd85b66efb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e
911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6gjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwrck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.683346 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:08:03.835262565 +0000 UTC Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.697323 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hlsjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae67081-37de-4da9-8ebb-152cd341fcfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47eb2bdf9a83dda7b51fb28686672780e17ea6b97c485add98e3ccf672c278d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2hn5\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hlsjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.706289 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.706383 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.706406 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.706431 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.706449 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:10Z","lastTransitionTime":"2026-02-04T08:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.725698 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee94f0f5-c35a-425f-8fbe-1b39b699bb0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abdc8015f23980591cab2f79ec055fe091f04528153eeffc1bd847b36f875abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d5b2379525af0ca9e7849065166edb027603a596e46a08eff328734e06551b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447910a2686f00798ad14133d5b7209205fc9f8aef7344000d68fc52fc7d267a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ec07f5b5f9c983e26d3ca115e0c90770a5806e96ca590b1fb0efa19f26a9d63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490121d18909326f409b2e530c4e89c5d7f690025e2bb017cec65d7f320fba90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6346559dcaae22071112da040daed66d3e037f19806e888b15863155d0c29c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acba65f600180e850d7b29c3e52367c7c45328b4710cdfaf1fff1532580a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjzfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n6jk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.745933 4644 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-f6ghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xkztj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f6ghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.769650 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bc2b7d-ea50-4938-9c52-1d15d68aba83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"le observer\\\\nW0204 08:41:49.171794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 08:41:49.171899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 08:41:49.172521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1696489709/tls.crt::/tmp/serving-cert-1696489709/tls.key\\\\\\\"\\\\nI0204 08:41:49.647221 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 08:41:49.660200 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 08:41:49.660292 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 08:41:49.660361 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 08:41:49.660406 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 08:41:49.672700 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 08:41:49.672796 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 08:41:49.672852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 08:41:49.672878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 08:41:49.672903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 08:41:49.672927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 08:41:49.672730 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 08:41:49.679704 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.791494 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.811128 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.811199 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.811216 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.811241 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.811263 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:10Z","lastTransitionTime":"2026-02-04T08:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.814536 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mszlj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7aa20f1c-0ad7-449e-a179-e246a52dfb2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://842358f4e1f1cf7051120bc0c0df667d8bfd38ea2aa8c58d059b3cec52839077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:39Z\\\",\\\"message\\\":\\\"2026-02-04T08:41:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3\\\\n2026-02-04T08:41:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6255edf2-6b28-453a-8443-8c3b9fe25db3 to /host/opt/cni/bin/\\\\n2026-02-04T08:41:54Z [verbose] multus-daemon started\\\\n2026-02-04T08:41:54Z [verbose] Readiness Indicator file check\\\\n2026-02-04T08:42:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbptc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mszlj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.830623 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.848843 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37e2958d-0c33-4fd2-a696-d789be254111\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281d8503bb962c0fb6debaa9c165406a41444b14d49317c246dcd58252fbac30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://797a42b23b1bd822468cc1da1d4e468ee7620bf4b69e866d18cceeebd41c832b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h7zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x9rsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 
08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.865780 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6b4174c7f6b291e7937895f00dab6c46c00958fc8a6942bd13ac94f6ff5d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.879614 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c3d5abf0c29ff2fc8371b79863f9033b21a42e8f83fd248e6c6a8566f2928b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11271b5604103fb50362bf4ad39451f9e238612f4b3b88fe8272c7e33a564e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.890964 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5661fb-774f-400f-8a08-21e749365b53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac7d60c75f87eb470bf996682e426f2d3568c4cef2915d071ab07b92fa86d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9273095a30ec465020d3077f9397def4dffc5ce8c852a169beca1308e46de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9273095a30ec465020d3077f9397def4dffc5ce8c852a169beca1308e46de62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.914618 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.914662 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.914675 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.914694 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.914706 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:10Z","lastTransitionTime":"2026-02-04T08:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.924227 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b427e3-9f50-4695-9df8-62a8ea4d0c47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92b52f60d3f31e0bda0491ae71bf86299634026b247f8d0bb9538454202a269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372b92334de1d79550c784e17760cee4bce822c60f0884f2cf6bc072127515e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4095e4c3813c6c5852c8b06fe0f56e2c1901e3650ef35c6ca381b1b6dc0fa02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://882fe63b7f90f8a94e031791baafce3bc745d93140e2f0a67f546930a1349bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f20c7066e53b94b13dc8e2e0a7812836daad5c55c4542083415cacb2142072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86acb874b2333e5fa2a42dfd1c1a5ad20d9b891f53ec1a719baa5c038802bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c04a7aabbf23b2e551c6bd67779157d4e331c94d7e46d3dd578a9596a85371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d4f3cfc75cf34d522d381c61113b1f71a10e8c59cbbb7c40665364deef2c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.939026 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3adc8977-1d4d-440e-87b0-b2b24960379d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8bdcd804d7e11711a92bf7ca5b6bbc9142a008dedf64bc6416ac617af2f71a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9edc24d8e0301ed6c7d3a8038ccfbe7a5c4326b3f2b2261ddae5132dc20f7c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74239fc60a62c0d3dcac22f633f850f4469b92a70b9a3f6ac8ee22a9fd4edf53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.965476 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1
e483cefcc7145408bdd41b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T08:42:56Z\\\",\\\"message\\\":\\\"d syncing service metrics on namespace openshift-etcd-operator for network=default : 2.211265ms\\\\nI0204 08:42:56.512851 6650 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0204 08:42:56.512752 6650 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0204 08:42:56.512737 6650 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-ckvx5 in node crc\\\\nI0204 08:42:56.512866 6650 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0204 08:42:56.512872 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-ckvx5 after 0 failed attempt(s)\\\\nI0204 08:42:56.512878 6650 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-ckvx5\\\\nI0204 08:42:56.512881 6650 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 08:42:56.512927 6650 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T08:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b6cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ksbcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.978162 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2da145d4-49d5-4b6f-b177-2d900eb63147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcf61b7a44572cb1d42695336c9b0528a0136dc05e64f9fdda7e19d84557aa93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjbd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:10 crc kubenswrapper[4644]: I0204 08:43:10.991746 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b32206ea-5df9-4a4f-a9b1-a9bfa34c4b6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c7765c2f5970c53360e66dfb2effe9d70c81353f13e3dce27f25476c7e1f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bd8b4deeb8e8f698d9444bd39319835511d500fbb7caa906422548caac2d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf44d83f03a7f3227f0d983fbd616e7edec4d02a27c434c9ffd4efe8349e0e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440
c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b7d99bc49797be9d25ba54c67e8ea5d14b062345641eb3ce11eae1dcd9de9c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T08:41:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T08:41:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:10Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.005065 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.017992 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.018042 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.018060 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.018080 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.018094 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:11Z","lastTransitionTime":"2026-02-04T08:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.018370 4644 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T08:41:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c293c86d08a10c71fbf6bf732aecb9f6e730a65e29328e416c0d7e6e7a47f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T08:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.133813 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.133861 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.133878 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.133904 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.133921 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:11Z","lastTransitionTime":"2026-02-04T08:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.217740 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.217812 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.217823 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.217842 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.217853 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:11Z","lastTransitionTime":"2026-02-04T08:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:11 crc kubenswrapper[4644]: E0204 08:43:11.232156 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.236478 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.236572 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.236627 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.236652 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.236705 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:11Z","lastTransitionTime":"2026-02-04T08:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:11 crc kubenswrapper[4644]: E0204 08:43:11.259017 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.263185 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.263218 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.263228 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.263243 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.263254 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:11Z","lastTransitionTime":"2026-02-04T08:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:11 crc kubenswrapper[4644]: E0204 08:43:11.282651 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.287744 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.287802 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.287829 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.287862 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.287888 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:11Z","lastTransitionTime":"2026-02-04T08:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:11 crc kubenswrapper[4644]: E0204 08:43:11.310625 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.316908 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.317014 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.317033 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.317094 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.317115 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:11Z","lastTransitionTime":"2026-02-04T08:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:11 crc kubenswrapper[4644]: E0204 08:43:11.342112 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148072Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608872Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T08:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4deccf2e-d791-4944-9e8f-0b83ba83be33\\\",\\\"systemUUID\\\":\\\"48850853-7009-48fc-9774-1a351e978855\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T08:43:11Z is after 2025-08-24T17:21:41Z" Feb 04 08:43:11 crc kubenswrapper[4644]: E0204 08:43:11.342559 4644 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.344649 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.344684 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.344696 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.344714 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.344726 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:11Z","lastTransitionTime":"2026-02-04T08:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.447790 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.447900 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.447924 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.447954 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.447973 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:11Z","lastTransitionTime":"2026-02-04T08:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.550892 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.550959 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.550980 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.551011 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.551034 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:11Z","lastTransitionTime":"2026-02-04T08:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.653398 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.653433 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.653442 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.653456 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.653467 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:11Z","lastTransitionTime":"2026-02-04T08:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.658944 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.659049 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:11 crc kubenswrapper[4644]: E0204 08:43:11.659141 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.659193 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.659245 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:11 crc kubenswrapper[4644]: E0204 08:43:11.659438 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:11 crc kubenswrapper[4644]: E0204 08:43:11.659532 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:11 crc kubenswrapper[4644]: E0204 08:43:11.660203 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.660743 4644 scope.go:117] "RemoveContainer" containerID="6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c" Feb 04 08:43:11 crc kubenswrapper[4644]: E0204 08:43:11.661114 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.684436 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:24:00.420697986 +0000 UTC Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.756719 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.756776 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.756793 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.756819 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.756836 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:11Z","lastTransitionTime":"2026-02-04T08:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.859259 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.859298 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.859309 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.859324 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.859338 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:11Z","lastTransitionTime":"2026-02-04T08:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.962574 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.962633 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.962650 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.962674 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:11 crc kubenswrapper[4644]: I0204 08:43:11.962706 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:11Z","lastTransitionTime":"2026-02-04T08:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.066221 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.066295 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.066314 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.066373 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.066394 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:12Z","lastTransitionTime":"2026-02-04T08:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.170592 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.170649 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.170672 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.170717 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.170820 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:12Z","lastTransitionTime":"2026-02-04T08:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.274454 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.274838 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.275052 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.275268 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.275535 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:12Z","lastTransitionTime":"2026-02-04T08:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.379454 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.379583 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.379609 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.379638 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.379658 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:12Z","lastTransitionTime":"2026-02-04T08:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.482816 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.482874 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.482890 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.482911 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.482928 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:12Z","lastTransitionTime":"2026-02-04T08:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.585759 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.585833 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.585857 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.585886 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.585908 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:12Z","lastTransitionTime":"2026-02-04T08:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.685298 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:33:39.212771526 +0000 UTC Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.688224 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.688279 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.688296 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.688320 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.688362 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:12Z","lastTransitionTime":"2026-02-04T08:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.791556 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.791624 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.791649 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.791679 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.791793 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:12Z","lastTransitionTime":"2026-02-04T08:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.928747 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.928827 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.928840 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.928857 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:12 crc kubenswrapper[4644]: I0204 08:43:12.928872 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:12Z","lastTransitionTime":"2026-02-04T08:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.031925 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.032029 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.032048 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.032074 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.032260 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:13Z","lastTransitionTime":"2026-02-04T08:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.136898 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.136966 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.136983 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.137008 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.137027 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:13Z","lastTransitionTime":"2026-02-04T08:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.239744 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.239808 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.239826 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.239854 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.239874 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:13Z","lastTransitionTime":"2026-02-04T08:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.343476 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.343536 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.343557 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.343581 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.343601 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:13Z","lastTransitionTime":"2026-02-04T08:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.447118 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.447180 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.447191 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.447213 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.447232 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:13Z","lastTransitionTime":"2026-02-04T08:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.549489 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.549520 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.549530 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.549543 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.549551 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:13Z","lastTransitionTime":"2026-02-04T08:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.652772 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.652821 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.652832 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.652849 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.652864 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:13Z","lastTransitionTime":"2026-02-04T08:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.659560 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.659603 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.659611 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.659585 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:13 crc kubenswrapper[4644]: E0204 08:43:13.659919 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:13 crc kubenswrapper[4644]: E0204 08:43:13.660077 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:13 crc kubenswrapper[4644]: E0204 08:43:13.660226 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:13 crc kubenswrapper[4644]: E0204 08:43:13.660408 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.686309 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:22:50.717422555 +0000 UTC Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.755149 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.755203 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.755214 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.755228 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.755237 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:13Z","lastTransitionTime":"2026-02-04T08:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.858564 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.858620 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.858637 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.858664 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.858683 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:13Z","lastTransitionTime":"2026-02-04T08:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.960917 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.960995 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.961014 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.961534 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:13 crc kubenswrapper[4644]: I0204 08:43:13.961592 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:13Z","lastTransitionTime":"2026-02-04T08:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.063806 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.063890 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.063905 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.063924 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.063936 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:14Z","lastTransitionTime":"2026-02-04T08:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.166518 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.166627 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.166649 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.166701 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.166720 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:14Z","lastTransitionTime":"2026-02-04T08:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.269956 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.270011 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.270032 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.270057 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.270074 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:14Z","lastTransitionTime":"2026-02-04T08:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.374006 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.374065 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.374078 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.374100 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.374113 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:14Z","lastTransitionTime":"2026-02-04T08:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.477229 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.477330 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.477356 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.477381 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.477394 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:14Z","lastTransitionTime":"2026-02-04T08:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.580877 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.580951 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.580961 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.580984 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.580996 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:14Z","lastTransitionTime":"2026-02-04T08:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.684210 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.684307 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.684659 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.685077 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.685146 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:14Z","lastTransitionTime":"2026-02-04T08:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.687527 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 02:31:37.577578855 +0000 UTC Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.788461 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.788853 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.788883 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.788965 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.788997 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:14Z","lastTransitionTime":"2026-02-04T08:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.894562 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.894997 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.895145 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.895375 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.895621 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:14Z","lastTransitionTime":"2026-02-04T08:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:14 crc kubenswrapper[4644]: I0204 08:43:14.999557 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.000052 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.000211 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.000401 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.000544 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:15Z","lastTransitionTime":"2026-02-04T08:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.104174 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.104246 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.104268 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.104297 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.104318 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:15Z","lastTransitionTime":"2026-02-04T08:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.208242 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.208608 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.208650 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.208681 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.208706 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:15Z","lastTransitionTime":"2026-02-04T08:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.311249 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.311316 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.311362 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.311388 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.311406 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:15Z","lastTransitionTime":"2026-02-04T08:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.415515 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.415578 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.415628 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.415653 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.415668 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:15Z","lastTransitionTime":"2026-02-04T08:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.519666 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.519713 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.519729 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.519750 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.519761 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:15Z","lastTransitionTime":"2026-02-04T08:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.623708 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.623771 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.623788 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.623816 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.623834 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:15Z","lastTransitionTime":"2026-02-04T08:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.659214 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.659732 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.659751 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.659901 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:15 crc kubenswrapper[4644]: E0204 08:43:15.660099 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:15 crc kubenswrapper[4644]: E0204 08:43:15.660186 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:15 crc kubenswrapper[4644]: E0204 08:43:15.660288 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:15 crc kubenswrapper[4644]: E0204 08:43:15.660434 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.688055 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:06:12.658474606 +0000 UTC Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.726834 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.726909 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.726934 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.726961 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.726979 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:15Z","lastTransitionTime":"2026-02-04T08:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.829874 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.830464 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.830481 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.830507 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.830519 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:15Z","lastTransitionTime":"2026-02-04T08:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.933124 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.933194 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.933214 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.933240 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:15 crc kubenswrapper[4644]: I0204 08:43:15.933262 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:15Z","lastTransitionTime":"2026-02-04T08:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.037811 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.037878 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.037901 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.037931 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.037953 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:16Z","lastTransitionTime":"2026-02-04T08:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.140613 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.140668 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.140684 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.140706 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.140722 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:16Z","lastTransitionTime":"2026-02-04T08:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.243788 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.243861 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.243873 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.243920 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.243932 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:16Z","lastTransitionTime":"2026-02-04T08:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.347645 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.347745 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.347764 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.347792 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.347812 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:16Z","lastTransitionTime":"2026-02-04T08:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.450932 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.450995 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.451020 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.451050 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.451083 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:16Z","lastTransitionTime":"2026-02-04T08:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.554184 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.554259 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.554277 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.554379 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.554400 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:16Z","lastTransitionTime":"2026-02-04T08:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.656523 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.656570 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.656586 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.656605 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.656619 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:16Z","lastTransitionTime":"2026-02-04T08:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.688630 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:13:41.568354466 +0000 UTC Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.759165 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.759266 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.759292 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.759321 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.759393 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:16Z","lastTransitionTime":"2026-02-04T08:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.862556 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.862632 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.862649 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.862942 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.862991 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:16Z","lastTransitionTime":"2026-02-04T08:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.966035 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.966094 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.966116 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.966141 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:16 crc kubenswrapper[4644]: I0204 08:43:16.966161 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:16Z","lastTransitionTime":"2026-02-04T08:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.069160 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.069211 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.069227 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.069248 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.069264 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:17Z","lastTransitionTime":"2026-02-04T08:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.171304 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.171415 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.171439 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.171466 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.171487 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:17Z","lastTransitionTime":"2026-02-04T08:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.274687 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.274726 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.274737 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.274751 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.274760 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:17Z","lastTransitionTime":"2026-02-04T08:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.378181 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.378245 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.378268 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.378299 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.378320 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:17Z","lastTransitionTime":"2026-02-04T08:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.480658 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.480699 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.480708 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.480723 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.480733 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:17Z","lastTransitionTime":"2026-02-04T08:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.583690 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.583745 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.583759 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.583779 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.583792 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:17Z","lastTransitionTime":"2026-02-04T08:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.659131 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:17 crc kubenswrapper[4644]: E0204 08:43:17.659318 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.659591 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:17 crc kubenswrapper[4644]: E0204 08:43:17.659734 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.659793 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.659970 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:17 crc kubenswrapper[4644]: E0204 08:43:17.660043 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:17 crc kubenswrapper[4644]: E0204 08:43:17.660156 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.688301 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.688350 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.688362 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.688376 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.688386 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:17Z","lastTransitionTime":"2026-02-04T08:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.688844 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:44:09.721483629 +0000 UTC Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.790561 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.790612 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.790629 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.790651 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.790667 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:17Z","lastTransitionTime":"2026-02-04T08:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.893514 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.893586 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.893613 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.893722 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:17 crc kubenswrapper[4644]: I0204 08:43:17.893767 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:17Z","lastTransitionTime":"2026-02-04T08:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.007214 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.007271 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.007288 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.007312 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.007361 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:18Z","lastTransitionTime":"2026-02-04T08:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.110874 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.110953 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.110975 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.111001 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.111018 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:18Z","lastTransitionTime":"2026-02-04T08:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.214528 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.214607 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.214628 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.214652 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.214666 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:18Z","lastTransitionTime":"2026-02-04T08:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.317729 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.318370 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.318403 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.318484 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.318511 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:18Z","lastTransitionTime":"2026-02-04T08:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.421658 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.421732 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.421756 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.421781 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.421798 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:18Z","lastTransitionTime":"2026-02-04T08:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.524668 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.524744 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.524768 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.524800 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.524821 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:18Z","lastTransitionTime":"2026-02-04T08:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.628613 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.628736 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.628758 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.628781 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.628798 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:18Z","lastTransitionTime":"2026-02-04T08:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.689372 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:39:54.7851308 +0000 UTC Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.732697 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.732726 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.732735 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.732747 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.732755 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:18Z","lastTransitionTime":"2026-02-04T08:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.835688 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.835722 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.835733 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.835748 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.835759 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:18Z","lastTransitionTime":"2026-02-04T08:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.938732 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.938788 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.938806 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.938835 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:18 crc kubenswrapper[4644]: I0204 08:43:18.938857 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:18Z","lastTransitionTime":"2026-02-04T08:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.042452 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.042512 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.042529 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.042558 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.042574 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:19Z","lastTransitionTime":"2026-02-04T08:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.145092 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.145139 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.145150 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.145166 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.145179 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:19Z","lastTransitionTime":"2026-02-04T08:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.247524 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.247568 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.247577 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.247592 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.247619 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:19Z","lastTransitionTime":"2026-02-04T08:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.350567 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.350639 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.350659 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.350686 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.350705 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:19Z","lastTransitionTime":"2026-02-04T08:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.454365 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.454437 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.454465 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.454497 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.454521 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:19Z","lastTransitionTime":"2026-02-04T08:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.557936 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.558000 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.558018 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.558042 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.558059 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:19Z","lastTransitionTime":"2026-02-04T08:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.659053 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:19 crc kubenswrapper[4644]: E0204 08:43:19.659280 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.659349 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.659316 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:19 crc kubenswrapper[4644]: E0204 08:43:19.659444 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.659498 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:19 crc kubenswrapper[4644]: E0204 08:43:19.660112 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:19 crc kubenswrapper[4644]: E0204 08:43:19.660360 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.660668 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.660726 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.660740 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.660753 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.660761 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:19Z","lastTransitionTime":"2026-02-04T08:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.690367 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:17:42.444717853 +0000 UTC Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.763072 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.763134 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.763152 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.763176 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.763193 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:19Z","lastTransitionTime":"2026-02-04T08:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.865145 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.865196 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.865206 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.865219 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.865227 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:19Z","lastTransitionTime":"2026-02-04T08:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.968480 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.968521 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.968530 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.968547 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:19 crc kubenswrapper[4644]: I0204 08:43:19.968558 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:19Z","lastTransitionTime":"2026-02-04T08:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.071642 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.071697 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.071709 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.071724 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.071735 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:20Z","lastTransitionTime":"2026-02-04T08:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.174963 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.175035 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.175056 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.175086 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.175109 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:20Z","lastTransitionTime":"2026-02-04T08:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.278827 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.278911 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.278929 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.278953 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.278969 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:20Z","lastTransitionTime":"2026-02-04T08:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.382808 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.382867 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.382883 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.382910 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.382930 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:20Z","lastTransitionTime":"2026-02-04T08:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.486411 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.486476 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.486494 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.486520 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.486537 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:20Z","lastTransitionTime":"2026-02-04T08:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.590022 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.590085 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.590105 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.590129 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.590145 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:20Z","lastTransitionTime":"2026-02-04T08:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.692430 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 13:44:15.172294852 +0000 UTC Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.694898 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.694960 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.694985 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.695017 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.695038 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:20Z","lastTransitionTime":"2026-02-04T08:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.762121 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ckvx5" podStartSLOduration=89.762093229 podStartE2EDuration="1m29.762093229s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:20.761677177 +0000 UTC m=+110.801734982" watchObservedRunningTime="2026-02-04 08:43:20.762093229 +0000 UTC m=+110.802151014" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.781697 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=60.781677633 podStartE2EDuration="1m0.781677633s" podCreationTimestamp="2026-02-04 08:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:20.781479567 +0000 UTC m=+110.821537412" watchObservedRunningTime="2026-02-04 08:43:20.781677633 +0000 UTC m=+110.821735398" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.797014 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.797052 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.797065 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.797084 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.797097 4644 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:20Z","lastTransitionTime":"2026-02-04T08:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.819149 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-mszlj" podStartSLOduration=89.81912494 podStartE2EDuration="1m29.81912494s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:20.817894803 +0000 UTC m=+110.857952568" watchObservedRunningTime="2026-02-04 08:43:20.81912494 +0000 UTC m=+110.859182735" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.832843 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podStartSLOduration=89.832815981 podStartE2EDuration="1m29.832815981s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:20.83179829 +0000 UTC m=+110.871856065" watchObservedRunningTime="2026-02-04 08:43:20.832815981 +0000 UTC m=+110.872873776" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.845973 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hlsjv" podStartSLOduration=89.845949955 podStartE2EDuration="1m29.845949955s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:20.84474489 +0000 UTC m=+110.884802685" watchObservedRunningTime="2026-02-04 08:43:20.845949955 +0000 UTC m=+110.886007730" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.878885 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-n6jk7" podStartSLOduration=89.878865889 podStartE2EDuration="1m29.878865889s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:20.866843497 +0000 UTC m=+110.906901282" watchObservedRunningTime="2026-02-04 08:43:20.878865889 +0000 UTC m=+110.918923654" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.899454 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.899516 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.899534 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.899559 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.899577 4644 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:20Z","lastTransitionTime":"2026-02-04T08:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.905777 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=91.905757347 podStartE2EDuration="1m31.905757347s" podCreationTimestamp="2026-02-04 08:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:20.905248011 +0000 UTC m=+110.945305816" watchObservedRunningTime="2026-02-04 08:43:20.905757347 +0000 UTC m=+110.945815112" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.956594 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x9rsd" podStartSLOduration=89.956572074 podStartE2EDuration="1m29.956572074s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:20.956549574 +0000 UTC m=+110.996607339" watchObservedRunningTime="2026-02-04 08:43:20.956572074 +0000 UTC m=+110.996629849" Feb 04 08:43:20 crc kubenswrapper[4644]: I0204 08:43:20.994457 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.994431582 podStartE2EDuration="1m28.994431582s" podCreationTimestamp="2026-02-04 08:41:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:20.977508937 +0000 UTC m=+111.017566692" watchObservedRunningTime="2026-02-04 08:43:20.994431582 +0000 UTC m=+111.034489357" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.001474 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.001509 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.001517 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.001532 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.001541 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:21Z","lastTransitionTime":"2026-02-04T08:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.030817 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=39.030791467 podStartE2EDuration="39.030791467s" podCreationTimestamp="2026-02-04 08:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:21.029095228 +0000 UTC m=+111.069152983" watchObservedRunningTime="2026-02-04 08:43:21.030791467 +0000 UTC m=+111.070849242" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.054147 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=87.054125361 podStartE2EDuration="1m27.054125361s" podCreationTimestamp="2026-02-04 08:41:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:21.051527665 +0000 UTC m=+111.091585420" watchObservedRunningTime="2026-02-04 08:43:21.054125361 +0000 UTC m=+111.094183126" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.103560 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.103590 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.103598 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.103611 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.103620 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:21Z","lastTransitionTime":"2026-02-04T08:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.206237 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.206280 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.206288 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.206303 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.206315 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:21Z","lastTransitionTime":"2026-02-04T08:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.308849 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.308921 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.308932 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.308946 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.308956 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:21Z","lastTransitionTime":"2026-02-04T08:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.404010 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.404054 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.404094 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.404111 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.404122 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:21Z","lastTransitionTime":"2026-02-04T08:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.426561 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.426622 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.426643 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.426672 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.426698 4644 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T08:43:21Z","lastTransitionTime":"2026-02-04T08:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.471842 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd"] Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.472415 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.474162 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.474717 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.474828 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.475942 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.544914 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a0ee1153-7315-4d00-a775-329b95f8bf89-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ms5xd\" (UID: \"a0ee1153-7315-4d00-a775-329b95f8bf89\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.544949 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ee1153-7315-4d00-a775-329b95f8bf89-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ms5xd\" (UID: \"a0ee1153-7315-4d00-a775-329b95f8bf89\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.544966 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0ee1153-7315-4d00-a775-329b95f8bf89-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ms5xd\" (UID: \"a0ee1153-7315-4d00-a775-329b95f8bf89\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.544997 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0ee1153-7315-4d00-a775-329b95f8bf89-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ms5xd\" (UID: \"a0ee1153-7315-4d00-a775-329b95f8bf89\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.545017 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a0ee1153-7315-4d00-a775-329b95f8bf89-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ms5xd\" (UID: \"a0ee1153-7315-4d00-a775-329b95f8bf89\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.645709 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0ee1153-7315-4d00-a775-329b95f8bf89-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ms5xd\" (UID: \"a0ee1153-7315-4d00-a775-329b95f8bf89\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.645761 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a0ee1153-7315-4d00-a775-329b95f8bf89-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ms5xd\" (UID: \"a0ee1153-7315-4d00-a775-329b95f8bf89\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.645843 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a0ee1153-7315-4d00-a775-329b95f8bf89-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ms5xd\" (UID: \"a0ee1153-7315-4d00-a775-329b95f8bf89\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.645871 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ee1153-7315-4d00-a775-329b95f8bf89-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ms5xd\" (UID: \"a0ee1153-7315-4d00-a775-329b95f8bf89\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.645897 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0ee1153-7315-4d00-a775-329b95f8bf89-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ms5xd\" (UID: \"a0ee1153-7315-4d00-a775-329b95f8bf89\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.645961 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a0ee1153-7315-4d00-a775-329b95f8bf89-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ms5xd\" (UID: \"a0ee1153-7315-4d00-a775-329b95f8bf89\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.646029 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a0ee1153-7315-4d00-a775-329b95f8bf89-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ms5xd\" (UID: \"a0ee1153-7315-4d00-a775-329b95f8bf89\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.647690 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0ee1153-7315-4d00-a775-329b95f8bf89-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ms5xd\" (UID: \"a0ee1153-7315-4d00-a775-329b95f8bf89\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.659577 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a0ee1153-7315-4d00-a775-329b95f8bf89-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ms5xd\" (UID: \"a0ee1153-7315-4d00-a775-329b95f8bf89\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.659782 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.659885 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:21 crc kubenswrapper[4644]: E0204 08:43:21.660072 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.660111 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.660147 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:21 crc kubenswrapper[4644]: E0204 08:43:21.660202 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:21 crc kubenswrapper[4644]: E0204 08:43:21.660247 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:21 crc kubenswrapper[4644]: E0204 08:43:21.660300 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.679131 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0ee1153-7315-4d00-a775-329b95f8bf89-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ms5xd\" (UID: \"a0ee1153-7315-4d00-a775-329b95f8bf89\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.693719 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:13:54.995221673 +0000 UTC Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.693835 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.704374 4644 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 04 08:43:21 crc kubenswrapper[4644]: I0204 08:43:21.792591 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" Feb 04 08:43:22 crc kubenswrapper[4644]: I0204 08:43:22.337655 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" event={"ID":"a0ee1153-7315-4d00-a775-329b95f8bf89","Type":"ContainerStarted","Data":"40d057f208cd3f3b8b16cefcd7fd84db00f31280b161348962f15a20b90300ee"} Feb 04 08:43:22 crc kubenswrapper[4644]: I0204 08:43:22.337722 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" event={"ID":"a0ee1153-7315-4d00-a775-329b95f8bf89","Type":"ContainerStarted","Data":"0457e10fbcf4c3b77479620cc76d642345835dfcb1eed73ba2744b69d8913dfd"} Feb 04 08:43:22 crc kubenswrapper[4644]: I0204 08:43:22.359775 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ms5xd" podStartSLOduration=91.359747879 podStartE2EDuration="1m31.359747879s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:22.35913943 +0000 UTC m=+112.399197275" watchObservedRunningTime="2026-02-04 08:43:22.359747879 +0000 UTC m=+112.399805674" Feb 04 08:43:23 crc kubenswrapper[4644]: I0204 08:43:23.659502 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:23 crc kubenswrapper[4644]: I0204 08:43:23.659625 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:23 crc kubenswrapper[4644]: I0204 08:43:23.659693 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:23 crc kubenswrapper[4644]: E0204 08:43:23.659905 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:23 crc kubenswrapper[4644]: I0204 08:43:23.660298 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:23 crc kubenswrapper[4644]: E0204 08:43:23.661146 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:23 crc kubenswrapper[4644]: E0204 08:43:23.661313 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:23 crc kubenswrapper[4644]: E0204 08:43:23.661600 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:25 crc kubenswrapper[4644]: I0204 08:43:25.659449 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:25 crc kubenswrapper[4644]: I0204 08:43:25.659485 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:25 crc kubenswrapper[4644]: I0204 08:43:25.659534 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:25 crc kubenswrapper[4644]: I0204 08:43:25.659566 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:25 crc kubenswrapper[4644]: E0204 08:43:25.660396 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:25 crc kubenswrapper[4644]: E0204 08:43:25.660555 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:25 crc kubenswrapper[4644]: E0204 08:43:25.660645 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:25 crc kubenswrapper[4644]: E0204 08:43:25.660706 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:25 crc kubenswrapper[4644]: I0204 08:43:25.661089 4644 scope.go:117] "RemoveContainer" containerID="6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c" Feb 04 08:43:25 crc kubenswrapper[4644]: E0204 08:43:25.661504 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ksbcg_openshift-ovn-kubernetes(98b7bb4a-12ca-4851-bf5a-49d38465ec0d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" Feb 04 08:43:26 crc kubenswrapper[4644]: I0204 08:43:26.354169 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mszlj_7aa20f1c-0ad7-449e-a179-e246a52dfb2a/kube-multus/1.log" Feb 04 08:43:26 crc kubenswrapper[4644]: I0204 08:43:26.354684 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mszlj_7aa20f1c-0ad7-449e-a179-e246a52dfb2a/kube-multus/0.log" Feb 04 08:43:26 crc kubenswrapper[4644]: I0204 08:43:26.354725 4644 generic.go:334] "Generic (PLEG): container finished" podID="7aa20f1c-0ad7-449e-a179-e246a52dfb2a" containerID="842358f4e1f1cf7051120bc0c0df667d8bfd38ea2aa8c58d059b3cec52839077" exitCode=1 Feb 04 08:43:26 crc kubenswrapper[4644]: I0204 08:43:26.354750 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mszlj" event={"ID":"7aa20f1c-0ad7-449e-a179-e246a52dfb2a","Type":"ContainerDied","Data":"842358f4e1f1cf7051120bc0c0df667d8bfd38ea2aa8c58d059b3cec52839077"} Feb 04 08:43:26 crc kubenswrapper[4644]: I0204 08:43:26.354778 4644 scope.go:117] "RemoveContainer" containerID="3a1a6c224e5fa82ecb367afa19b87b2cf0e30a0fbc12f1bc8f4b78203a40c2b5" Feb 04 08:43:26 crc kubenswrapper[4644]: I0204 08:43:26.355525 4644 scope.go:117] "RemoveContainer" containerID="842358f4e1f1cf7051120bc0c0df667d8bfd38ea2aa8c58d059b3cec52839077" Feb 04 08:43:26 crc kubenswrapper[4644]: E0204 
Feb 04 08:43:27 crc kubenswrapper[4644]: I0204 08:43:27.361220 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mszlj_7aa20f1c-0ad7-449e-a179-e246a52dfb2a/kube-multus/1.log"
Feb 04 08:43:27 crc kubenswrapper[4644]: I0204 08:43:27.659510 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 08:43:27 crc kubenswrapper[4644]: I0204 08:43:27.659534 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 08:43:27 crc kubenswrapper[4644]: E0204 08:43:27.660110 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 08:43:27 crc kubenswrapper[4644]: I0204 08:43:27.659602 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 08:43:27 crc kubenswrapper[4644]: I0204 08:43:27.659546 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp"
Feb 04 08:43:27 crc kubenswrapper[4644]: E0204 08:43:27.660194 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 08:43:27 crc kubenswrapper[4644]: E0204 08:43:27.660433 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5"
Feb 04 08:43:27 crc kubenswrapper[4644]: E0204 08:43:27.660720 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 08:43:29 crc kubenswrapper[4644]: I0204 08:43:29.659147 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp"
Feb 04 08:43:29 crc kubenswrapper[4644]: I0204 08:43:29.659226 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 08:43:29 crc kubenswrapper[4644]: I0204 08:43:29.659192 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 08:43:29 crc kubenswrapper[4644]: I0204 08:43:29.659149 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 08:43:29 crc kubenswrapper[4644]: E0204 08:43:29.659360 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5"
Feb 04 08:43:29 crc kubenswrapper[4644]: E0204 08:43:29.659518 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 08:43:29 crc kubenswrapper[4644]: E0204 08:43:29.659677 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 08:43:29 crc kubenswrapper[4644]: E0204 08:43:29.659860 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 08:43:30 crc kubenswrapper[4644]: E0204 08:43:30.623622 4644 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Feb 04 08:43:30 crc kubenswrapper[4644]: E0204 08:43:30.748460 4644 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 04 08:43:31 crc kubenswrapper[4644]: I0204 08:43:31.658896 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 08:43:31 crc kubenswrapper[4644]: I0204 08:43:31.658938 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
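Every "network is not ready" entry in this stretch, along with the kubelet.go:2916 and kubelet_node_status.go:497 lines just above, reduces to a single condition: the container runtime reports NetworkReady=false until a CNI network configuration appears in /etc/kubernetes/cni/net.d/ (written here by the multus and OVN-Kubernetes pods once they recover from their crash loops). A rough, purely illustrative stand-in for that file check:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig loosely mirrors the runtime's readiness condition: the pod
// network is considered configurable once at least one CNI config file
// (.conf, .conflist, or .json) exists in the config directory.
func hasCNIConfig(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false // directory missing or unreadable: not ready
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("network configurable:", hasCNIConfig("/etc/kubernetes/cni/net.d"))
}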
Feb 04 08:43:31 crc kubenswrapper[4644]: I0204 08:43:31.658938 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp"
Feb 04 08:43:31 crc kubenswrapper[4644]: I0204 08:43:31.658998 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 08:43:31 crc kubenswrapper[4644]: E0204 08:43:31.659129 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 08:43:31 crc kubenswrapper[4644]: E0204 08:43:31.659292 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 08:43:31 crc kubenswrapper[4644]: E0204 08:43:31.659471 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5"
Feb 04 08:43:31 crc kubenswrapper[4644]: E0204 08:43:31.659640 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 08:43:33 crc kubenswrapper[4644]: I0204 08:43:33.659899 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 08:43:33 crc kubenswrapper[4644]: I0204 08:43:33.659991 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 08:43:33 crc kubenswrapper[4644]: I0204 08:43:33.659947 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp"
Feb 04 08:43:33 crc kubenswrapper[4644]: E0204 08:43:33.660173 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 08:43:33 crc kubenswrapper[4644]: E0204 08:43:33.660309 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 08:43:33 crc kubenswrapper[4644]: I0204 08:43:33.660408 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 08:43:33 crc kubenswrapper[4644]: E0204 08:43:33.660498 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5"
Feb 04 08:43:33 crc kubenswrapper[4644]: E0204 08:43:33.660582 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 08:43:35 crc kubenswrapper[4644]: I0204 08:43:35.659471 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 08:43:35 crc kubenswrapper[4644]: I0204 08:43:35.659464 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 08:43:35 crc kubenswrapper[4644]: E0204 08:43:35.659601 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 08:43:35 crc kubenswrapper[4644]: I0204 08:43:35.659483 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 08:43:35 crc kubenswrapper[4644]: I0204 08:43:35.659757 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp"
Feb 04 08:43:35 crc kubenswrapper[4644]: E0204 08:43:35.659767 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:35 crc kubenswrapper[4644]: E0204 08:43:35.659857 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:35 crc kubenswrapper[4644]: E0204 08:43:35.659970 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:35 crc kubenswrapper[4644]: E0204 08:43:35.750693 4644 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 04 08:43:37 crc kubenswrapper[4644]: I0204 08:43:37.659388 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:37 crc kubenswrapper[4644]: I0204 08:43:37.659484 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:37 crc kubenswrapper[4644]: E0204 08:43:37.659557 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:37 crc kubenswrapper[4644]: I0204 08:43:37.659485 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:37 crc kubenswrapper[4644]: E0204 08:43:37.659844 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:37 crc kubenswrapper[4644]: E0204 08:43:37.660035 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:37 crc kubenswrapper[4644]: I0204 08:43:37.660185 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:37 crc kubenswrapper[4644]: E0204 08:43:37.660282 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:38 crc kubenswrapper[4644]: I0204 08:43:38.659862 4644 scope.go:117] "RemoveContainer" containerID="842358f4e1f1cf7051120bc0c0df667d8bfd38ea2aa8c58d059b3cec52839077" Feb 04 08:43:39 crc kubenswrapper[4644]: I0204 08:43:39.412149 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mszlj_7aa20f1c-0ad7-449e-a179-e246a52dfb2a/kube-multus/1.log" Feb 04 08:43:39 crc kubenswrapper[4644]: I0204 08:43:39.412602 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mszlj" event={"ID":"7aa20f1c-0ad7-449e-a179-e246a52dfb2a","Type":"ContainerStarted","Data":"a41c3d38db21d832941edef1eb09df8ed99a05a9e997b6cdd401a44230fcd4f4"} Feb 04 08:43:39 crc kubenswrapper[4644]: I0204 08:43:39.659804 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:39 crc kubenswrapper[4644]: E0204 08:43:39.659978 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:39 crc kubenswrapper[4644]: I0204 08:43:39.660284 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:39 crc kubenswrapper[4644]: E0204 08:43:39.660415 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:39 crc kubenswrapper[4644]: I0204 08:43:39.660624 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:39 crc kubenswrapper[4644]: E0204 08:43:39.660719 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:39 crc kubenswrapper[4644]: I0204 08:43:39.660631 4644 util.go:30] "No sandbox for pod can be found. 
Feb 04 08:43:39 crc kubenswrapper[4644]: E0204 08:43:39.660898 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5"
Feb 04 08:43:39 crc kubenswrapper[4644]: I0204 08:43:39.662121 4644 scope.go:117] "RemoveContainer" containerID="6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c"
Feb 04 08:43:40 crc kubenswrapper[4644]: I0204 08:43:40.417109 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/3.log"
Feb 04 08:43:40 crc kubenswrapper[4644]: I0204 08:43:40.419443 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerStarted","Data":"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc"}
Feb 04 08:43:40 crc kubenswrapper[4644]: I0204 08:43:40.419950 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg"
Feb 04 08:43:40 crc kubenswrapper[4644]: I0204 08:43:40.449538 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podStartSLOduration=109.449509995 podStartE2EDuration="1m49.449509995s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:40.448400833 +0000 UTC m=+130.488458608" watchObservedRunningTime="2026-02-04 08:43:40.449509995 +0000 UTC m=+130.489567770"
Feb 04 08:43:40 crc kubenswrapper[4644]: I0204 08:43:40.635885 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-f6ghp"]
Feb 04 08:43:40 crc kubenswrapper[4644]: I0204 08:43:40.636020 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp"
Feb 04 08:43:40 crc kubenswrapper[4644]: E0204 08:43:40.636162 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5"
Feb 04 08:43:40 crc kubenswrapper[4644]: E0204 08:43:40.751707 4644 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 04 08:43:41 crc kubenswrapper[4644]: I0204 08:43:41.659174 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 08:43:41 crc kubenswrapper[4644]: I0204 08:43:41.659229 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 08:43:41 crc kubenswrapper[4644]: E0204 08:43:41.659728 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 08:43:41 crc kubenswrapper[4644]: E0204 08:43:41.659905 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 08:43:41 crc kubenswrapper[4644]: I0204 08:43:41.659420 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 08:43:41 crc kubenswrapper[4644]: E0204 08:43:41.660022 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 08:43:42 crc kubenswrapper[4644]: I0204 08:43:42.659465 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp"
Feb 04 08:43:42 crc kubenswrapper[4644]: E0204 08:43:42.659699 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5"
Feb 04 08:43:43 crc kubenswrapper[4644]: I0204 08:43:43.659236 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 08:43:43 crc kubenswrapper[4644]: I0204 08:43:43.659285 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 08:43:43 crc kubenswrapper[4644]: I0204 08:43:43.659430 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 08:43:43 crc kubenswrapper[4644]: E0204 08:43:43.659431 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
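The pod_startup_latency_tracker.go:104 entry above for ovnkube-node-ksbcg is internally consistent: podStartE2EDuration ("1m49.449509995s") is watchObservedRunningTime (08:43:40.449509995) minus podCreationTimestamp (08:41:51), and podStartSLOduration equals the same value in seconds because no image pull occurred (both pull timestamps are the zero time, so nothing is subtracted). The same arithmetic, for the record:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamps printed in the log.
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-02-04 08:41:51 +0000 UTC")
	running, _ := time.Parse(layout, "2026-02-04 08:43:40.449509995 +0000 UTC")
	fmt.Println(running.Sub(created)) // 1m49.449509995s
}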
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:43 crc kubenswrapper[4644]: E0204 08:43:43.659576 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:43 crc kubenswrapper[4644]: E0204 08:43:43.659707 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:44 crc kubenswrapper[4644]: I0204 08:43:44.659237 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:44 crc kubenswrapper[4644]: E0204 08:43:44.659830 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f6ghp" podUID="b0c747eb-fe5e-4cad-a021-307cc2ed1ad5" Feb 04 08:43:45 crc kubenswrapper[4644]: I0204 08:43:45.658694 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:45 crc kubenswrapper[4644]: I0204 08:43:45.658757 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:45 crc kubenswrapper[4644]: I0204 08:43:45.658802 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:45 crc kubenswrapper[4644]: E0204 08:43:45.658885 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 08:43:45 crc kubenswrapper[4644]: E0204 08:43:45.658822 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 08:43:45 crc kubenswrapper[4644]: E0204 08:43:45.659027 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 08:43:46 crc kubenswrapper[4644]: I0204 08:43:46.659201 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:43:46 crc kubenswrapper[4644]: I0204 08:43:46.662852 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 04 08:43:46 crc kubenswrapper[4644]: I0204 08:43:46.664693 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 04 08:43:47 crc kubenswrapper[4644]: I0204 08:43:47.659266 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:47 crc kubenswrapper[4644]: I0204 08:43:47.659397 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:47 crc kubenswrapper[4644]: I0204 08:43:47.659417 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:47 crc kubenswrapper[4644]: I0204 08:43:47.663247 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 04 08:43:47 crc kubenswrapper[4644]: I0204 08:43:47.664050 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 04 08:43:47 crc kubenswrapper[4644]: I0204 08:43:47.664369 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 04 08:43:47 crc kubenswrapper[4644]: I0204 08:43:47.664659 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.840281 4644 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.893200 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vw5x9"] Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.894187 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg"] Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.894484 4644 util.go:30] "No sandbox for pod can be found. 
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.894677 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b897n"]
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.895089 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.895953 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b897n"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.902182 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4"]
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.903101 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.910651 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cghxg"]
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.911520 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.912710 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.915996 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.916556 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.917450 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.917785 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.921945 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.922313 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.922759 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.923104 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.923263 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.923274 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.923692 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.924212 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.924317 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.924634 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.924878 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.925704 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld"]
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.926867 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-w4cjj"]
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.927517 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.928066 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.928503 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.926881 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.928945 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.929085 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.929161 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.929232 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.929375 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.929506 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.929538 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.929619 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.930027 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-2mwnq"]
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.932001 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-w4cjj"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.933835 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2mwnq"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.936702 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.937056 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.937773 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e58256-f013-4739-893b-6d403836f94e-config\") pod \"route-controller-manager-6576b87f9c-4jgmg\" (UID: \"32e58256-f013-4739-893b-6d403836f94e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.937809 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cghxg\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.937836 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7rvk\" (UniqueName: \"kubernetes.io/projected/be892c9a-a311-4937-8c75-71fa5452379a-kube-api-access-g7rvk\") pod \"controller-manager-879f6c89f-cghxg\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.937864 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8nml\" (UniqueName: \"kubernetes.io/projected/b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13-kube-api-access-n8nml\") pod \"openshift-apiserver-operator-796bbdcf4f-cc6f4\" (UID: \"b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.937891 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-config\") pod \"controller-manager-879f6c89f-cghxg\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.937914 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09e24ca-d42d-4f59-9a19-83410a062bb1-config\") pod \"machine-api-operator-5694c8668f-vw5x9\" (UID: \"c09e24ca-d42d-4f59-9a19-83410a062bb1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.937954 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmhkq\" (UniqueName: \"kubernetes.io/projected/c09e24ca-d42d-4f59-9a19-83410a062bb1-kube-api-access-cmhkq\") pod \"machine-api-operator-5694c8668f-vw5x9\" (UID: \"c09e24ca-d42d-4f59-9a19-83410a062bb1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.937977 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cc6f4\" (UID: \"b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.938020 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e892d6f0-1fd8-4a21-9023-7bb810c30396-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-b897n\" (UID: \"e892d6f0-1fd8-4a21-9023-7bb810c30396\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b897n"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.938054 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32e58256-f013-4739-893b-6d403836f94e-client-ca\") pod \"route-controller-manager-6576b87f9c-4jgmg\" (UID: \"32e58256-f013-4739-893b-6d403836f94e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.938077 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c09e24ca-d42d-4f59-9a19-83410a062bb1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vw5x9\" (UID: \"c09e24ca-d42d-4f59-9a19-83410a062bb1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.938104 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e58256-f013-4739-893b-6d403836f94e-serving-cert\") pod \"route-controller-manager-6576b87f9c-4jgmg\" (UID: \"32e58256-f013-4739-893b-6d403836f94e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.938128 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be892c9a-a311-4937-8c75-71fa5452379a-serving-cert\") pod \"controller-manager-879f6c89f-cghxg\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.938153 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-client-ca\") pod \"controller-manager-879f6c89f-cghxg\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.938175 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4x8k\" (UniqueName: \"kubernetes.io/projected/e892d6f0-1fd8-4a21-9023-7bb810c30396-kube-api-access-g4x8k\") pod \"cluster-samples-operator-665b6dd947-b897n\" (UID: \"e892d6f0-1fd8-4a21-9023-7bb810c30396\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b897n"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.938198 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpvfq\" (UniqueName: \"kubernetes.io/projected/32e58256-f013-4739-893b-6d403836f94e-kube-api-access-xpvfq\") pod \"route-controller-manager-6576b87f9c-4jgmg\" (UID: \"32e58256-f013-4739-893b-6d403836f94e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.938223 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cc6f4\" (UID: \"b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.938250 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c09e24ca-d42d-4f59-9a19-83410a062bb1-images\") pod \"machine-api-operator-5694c8668f-vw5x9\" (UID: \"c09e24ca-d42d-4f59-9a19-83410a062bb1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.941015 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.941195 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz"]
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.941260 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.941434 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.941670 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.941904 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.942306 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.942459 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.942493 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.942617 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.942656 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.942755 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
object-"openshift-apiserver"/"etcd-client" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.942859 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.942930 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.942944 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.943088 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.943182 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.943250 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.943317 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.943410 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.942622 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.943523 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.944998 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.945132 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.945220 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.980924 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fxwlv"] Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.985024 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xh4t4"] Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.986618 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.986728 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg"] Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.986751 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cghxg"] Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.987410 4644 util.go:30] "No sandbox for pod can be found. 
Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.988001 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.988252 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.988427 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" Feb 04 08:43:51 crc kubenswrapper[4644]: I0204 08:43:51.988702 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.010383 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-zcwz9"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.010812 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s9vpz"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.011137 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.011475 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zcwz9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.018625 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.018745 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.018793 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.018949 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.019008 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.019030 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.019150 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.019201 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.019306 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.019317 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 
04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.019474 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.019613 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.019666 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.019743 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.019814 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.020320 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.020581 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.020666 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b897n"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.020759 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.021774 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.021903 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.022029 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.022154 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.025715 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.026275 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.026491 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.028872 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.033612 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.034863 4644 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-console-operator/console-operator-58897d9998-tbxr8"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.035390 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.035723 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9n8df"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.036596 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9n8df" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.039886 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-etcd-serving-ca\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.039928 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpvfq\" (UniqueName: \"kubernetes.io/projected/32e58256-f013-4739-893b-6d403836f94e-kube-api-access-xpvfq\") pod \"route-controller-manager-6576b87f9c-4jgmg\" (UID: \"32e58256-f013-4739-893b-6d403836f94e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.039950 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cc6f4\" (UID: \"b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.039972 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c09e24ca-d42d-4f59-9a19-83410a062bb1-images\") pod \"machine-api-operator-5694c8668f-vw5x9\" (UID: \"c09e24ca-d42d-4f59-9a19-83410a062bb1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.039992 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040014 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-encryption-config\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040032 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-serving-cert\") pod \"console-f9d7485db-2mwnq\" (UID: 
\"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040048 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-trusted-ca-bundle\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040064 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/469892a0-464b-45d5-8152-53498212b9ac-audit-dir\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040088 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-oauth-serving-cert\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040109 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e58256-f013-4739-893b-6d403836f94e-config\") pod \"route-controller-manager-6576b87f9c-4jgmg\" (UID: \"32e58256-f013-4739-893b-6d403836f94e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040126 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8beb277d-de0d-4076-b16c-8589f849f8de-config\") pod \"machine-approver-56656f9798-z6cqz\" (UID: \"8beb277d-de0d-4076-b16c-8589f849f8de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040141 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-oauth-config\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040173 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cghxg\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040190 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7rvk\" (UniqueName: \"kubernetes.io/projected/be892c9a-a311-4937-8c75-71fa5452379a-kube-api-access-g7rvk\") pod \"controller-manager-879f6c89f-cghxg\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040209 4644 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8beb277d-de0d-4076-b16c-8589f849f8de-auth-proxy-config\") pod \"machine-approver-56656f9798-z6cqz\" (UID: \"8beb277d-de0d-4076-b16c-8589f849f8de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040225 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8beb277d-de0d-4076-b16c-8589f849f8de-machine-approver-tls\") pod \"machine-approver-56656f9798-z6cqz\" (UID: \"8beb277d-de0d-4076-b16c-8589f849f8de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040245 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9babe17-48df-46b7-9d27-a6698abfa7e7-service-ca-bundle\") pod \"authentication-operator-69f744f599-xh4t4\" (UID: \"d9babe17-48df-46b7-9d27-a6698abfa7e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040263 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-config\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040281 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w6lw\" (UniqueName: \"kubernetes.io/projected/71118688-df29-464b-a113-93f582f8ac6f-kube-api-access-8w6lw\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040297 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-etcd-ca\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040316 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8nml\" (UniqueName: \"kubernetes.io/projected/b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13-kube-api-access-n8nml\") pod \"openshift-apiserver-operator-796bbdcf4f-cc6f4\" (UID: \"b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040354 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-config\") pod \"controller-manager-879f6c89f-cghxg\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040371 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c09e24ca-d42d-4f59-9a19-83410a062bb1-config\") pod \"machine-api-operator-5694c8668f-vw5x9\" (UID: \"c09e24ca-d42d-4f59-9a19-83410a062bb1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040388 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-etcd-client\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040403 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040428 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040444 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-config\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040460 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9babe17-48df-46b7-9d27-a6698abfa7e7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xh4t4\" (UID: \"d9babe17-48df-46b7-9d27-a6698abfa7e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040475 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9babe17-48df-46b7-9d27-a6698abfa7e7-serving-cert\") pod \"authentication-operator-69f744f599-xh4t4\" (UID: \"d9babe17-48df-46b7-9d27-a6698abfa7e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040490 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kbcc\" (UniqueName: \"kubernetes.io/projected/8beb277d-de0d-4076-b16c-8589f849f8de-kube-api-access-8kbcc\") pod \"machine-approver-56656f9798-z6cqz\" (UID: \"8beb277d-de0d-4076-b16c-8589f849f8de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040506 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdpgr\" (UniqueName: 
\"kubernetes.io/projected/323e297c-2d63-4230-8110-c7d9c9da3538-kube-api-access-sdpgr\") pod \"downloads-7954f5f757-zcwz9\" (UID: \"323e297c-2d63-4230-8110-c7d9c9da3538\") " pod="openshift-console/downloads-7954f5f757-zcwz9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040521 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040537 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040554 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71118688-df29-464b-a113-93f582f8ac6f-serving-cert\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040570 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040587 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/71118688-df29-464b-a113-93f582f8ac6f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040604 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46wmq\" (UniqueName: \"kubernetes.io/projected/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-kube-api-access-46wmq\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040621 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-service-ca\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040638 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m7lx\" (UniqueName: 
\"kubernetes.io/projected/59a2b9fd-ede9-4e85-8ad0-552716ecca00-kube-api-access-9m7lx\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040656 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmhkq\" (UniqueName: \"kubernetes.io/projected/c09e24ca-d42d-4f59-9a19-83410a062bb1-kube-api-access-cmhkq\") pod \"machine-api-operator-5694c8668f-vw5x9\" (UID: \"c09e24ca-d42d-4f59-9a19-83410a062bb1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040674 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt25f\" (UniqueName: \"kubernetes.io/projected/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-kube-api-access-tt25f\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040690 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/71118688-df29-464b-a113-93f582f8ac6f-audit-policies\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040707 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71118688-df29-464b-a113-93f582f8ac6f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040724 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-config\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040740 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/71118688-df29-464b-a113-93f582f8ac6f-encryption-config\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040759 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cc6f4\" (UID: \"b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040777 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: 
\"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040792 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-audit-dir\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040809 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040842 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-audit\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040860 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e892d6f0-1fd8-4a21-9023-7bb810c30396-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-b897n\" (UID: \"e892d6f0-1fd8-4a21-9023-7bb810c30396\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b897n" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040878 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040911 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32e58256-f013-4739-893b-6d403836f94e-client-ca\") pod \"route-controller-manager-6576b87f9c-4jgmg\" (UID: \"32e58256-f013-4739-893b-6d403836f94e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040930 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-audit-policies\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040947 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5b2k\" (UniqueName: \"kubernetes.io/projected/469892a0-464b-45d5-8152-53498212b9ac-kube-api-access-p5b2k\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 
crc kubenswrapper[4644]: I0204 08:43:52.040967 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c09e24ca-d42d-4f59-9a19-83410a062bb1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vw5x9\" (UID: \"c09e24ca-d42d-4f59-9a19-83410a062bb1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040983 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e58256-f013-4739-893b-6d403836f94e-serving-cert\") pod \"route-controller-manager-6576b87f9c-4jgmg\" (UID: \"32e58256-f013-4739-893b-6d403836f94e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.040999 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be892c9a-a311-4937-8c75-71fa5452379a-serving-cert\") pod \"controller-manager-879f6c89f-cghxg\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.041019 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71118688-df29-464b-a113-93f582f8ac6f-audit-dir\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.041034 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-serving-cert\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.041051 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.041067 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-serving-cert\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.041082 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.041098 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-etcd-service-ca\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.041116 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-client-ca\") pod \"controller-manager-879f6c89f-cghxg\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.041132 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-image-import-ca\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.041147 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-etcd-client\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.041164 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4x8k\" (UniqueName: \"kubernetes.io/projected/e892d6f0-1fd8-4a21-9023-7bb810c30396-kube-api-access-g4x8k\") pod \"cluster-samples-operator-665b6dd947-b897n\" (UID: \"e892d6f0-1fd8-4a21-9023-7bb810c30396\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b897n" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.041182 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9babe17-48df-46b7-9d27-a6698abfa7e7-config\") pod \"authentication-operator-69f744f599-xh4t4\" (UID: \"d9babe17-48df-46b7-9d27-a6698abfa7e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.041197 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqg4l\" (UniqueName: \"kubernetes.io/projected/d9babe17-48df-46b7-9d27-a6698abfa7e7-kube-api-access-bqg4l\") pod \"authentication-operator-69f744f599-xh4t4\" (UID: \"d9babe17-48df-46b7-9d27-a6698abfa7e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.041212 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/71118688-df29-464b-a113-93f582f8ac6f-etcd-client\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.041235 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-node-pullsecrets\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.041254 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.042605 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-config\") pod \"controller-manager-879f6c89f-cghxg\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.043169 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09e24ca-d42d-4f59-9a19-83410a062bb1-config\") pod \"machine-api-operator-5694c8668f-vw5x9\" (UID: \"c09e24ca-d42d-4f59-9a19-83410a062bb1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.044232 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c09e24ca-d42d-4f59-9a19-83410a062bb1-images\") pod \"machine-api-operator-5694c8668f-vw5x9\" (UID: \"c09e24ca-d42d-4f59-9a19-83410a062bb1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.044777 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cc6f4\" (UID: \"b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.046303 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-client-ca\") pod \"controller-manager-879f6c89f-cghxg\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.047490 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cghxg\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.049819 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32e58256-f013-4739-893b-6d403836f94e-client-ca\") pod \"route-controller-manager-6576b87f9c-4jgmg\" (UID: \"32e58256-f013-4739-893b-6d403836f94e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" Feb 
Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.050891 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e58256-f013-4739-893b-6d403836f94e-config\") pod \"route-controller-manager-6576b87f9c-4jgmg\" (UID: \"32e58256-f013-4739-893b-6d403836f94e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.052433 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.054219 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.054530 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e892d6f0-1fd8-4a21-9023-7bb810c30396-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-b897n\" (UID: \"e892d6f0-1fd8-4a21-9023-7bb810c30396\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b897n" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.054622 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.054674 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.054709 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.054793 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.054843 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.054882 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.054960 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.055101 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.055188 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.055641 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.056532 4644 
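kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xh4t4"]

The reflector.go:368 lines record the kubelet populating a local cache for each individual Secret and ConfigMap its pending pods reference: it watches single named objects, not whole namespaces, which is why every object-"namespace"/"name" pair gets its own line. A minimal sketch of such a per-object reflector, assuming client-go; the helper name watchSecret is ours:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/fields"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
)

// watchSecret starts one reflector scoped to a single named Secret, the
// pattern behind the per-object "Caches populated" lines above.
func watchSecret(cs kubernetes.Interface, namespace, name string, stop <-chan struct{}) cache.Store {
    lw := cache.NewListWatchFromClient(
        cs.CoreV1().RESTClient(), "secrets", namespace,
        fields.OneTermEqualSelector("metadata.name", name)) // one object only
    store := cache.NewStore(cache.MetaNamespaceKeyFunc)
    r := cache.NewReflector(lw, &corev1.Secret{}, store, 0)
    go r.Run(stop) // the reflector's own logging produces lines like those above
    return store
}

Keeping the watch this narrow means a node only caches (and is only authorized to read) the objects its own pods actually mount.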
Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.056538 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.057630 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be892c9a-a311-4937-8c75-71fa5452379a-serving-cert\") pod \"controller-manager-879f6c89f-cghxg\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.064433 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.071756 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cc6f4\" (UID: \"b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.072770 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.072856 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.073550 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.073768 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.077522 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.077625 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c09e24ca-d42d-4f59-9a19-83410a062bb1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vw5x9\" (UID: \"c09e24ca-d42d-4f59-9a19-83410a062bb1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.077772 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vw5x9"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.078854 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.078912 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.079906 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.083093 4644 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.088873 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpvfq\" (UniqueName: \"kubernetes.io/projected/32e58256-f013-4739-893b-6d403836f94e-kube-api-access-xpvfq\") pod \"route-controller-manager-6576b87f9c-4jgmg\" (UID: \"32e58256-f013-4739-893b-6d403836f94e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.089494 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.097363 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmhkq\" (UniqueName: \"kubernetes.io/projected/c09e24ca-d42d-4f59-9a19-83410a062bb1-kube-api-access-cmhkq\") pod \"machine-api-operator-5694c8668f-vw5x9\" (UID: \"c09e24ca-d42d-4f59-9a19-83410a062bb1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.099775 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.099893 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.100351 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.102406 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8nml\" (UniqueName: \"kubernetes.io/projected/b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13-kube-api-access-n8nml\") pod \"openshift-apiserver-operator-796bbdcf4f-cc6f4\" (UID: \"b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.108706 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.109270 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.109706 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2mwnq"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.109798 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.110089 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.113638 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.114532 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htm2g"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.114976 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.116014 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.118937 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htm2g" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.119292 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.119720 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9zrhj"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.120453 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.122858 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.123650 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.123998 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-zjdj8"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.124486 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.126512 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ffncs"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.132363 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.132910 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffncs" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.134858 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-spmn6"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.135597 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.136695 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.138555 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.138689 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.138690 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-spmn6" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.140715 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.140961 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.141073 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gbncm"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.141169 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.141440 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gbncm" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142138 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-serving-cert\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142160 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142178 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-etcd-service-ca\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142197 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5739acd-dba1-466b-9397-fba070e97c71-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-47859\" (UID: \"b5739acd-dba1-466b-9397-fba070e97c71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142223 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-image-import-ca\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142239 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c83f1fe0-8bb7-4938-849c-8c199a9fef3b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5jdz9\" (UID: \"c83f1fe0-8bb7-4938-849c-8c199a9fef3b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142263 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9babe17-48df-46b7-9d27-a6698abfa7e7-config\") pod \"authentication-operator-69f744f599-xh4t4\" (UID: \"d9babe17-48df-46b7-9d27-a6698abfa7e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142278 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqg4l\" (UniqueName: \"kubernetes.io/projected/d9babe17-48df-46b7-9d27-a6698abfa7e7-kube-api-access-bqg4l\") pod \"authentication-operator-69f744f599-xh4t4\" (UID: \"d9babe17-48df-46b7-9d27-a6698abfa7e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:52 crc kubenswrapper[4644]: 
I0204 08:43:52.142295 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/71118688-df29-464b-a113-93f582f8ac6f-etcd-client\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142309 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-etcd-client\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142343 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzxqr\" (UniqueName: \"kubernetes.io/projected/1380462d-7e7c-4c20-859c-4132b703369e-kube-api-access-vzxqr\") pod \"openshift-config-operator-7777fb866f-l6nqv\" (UID: \"1380462d-7e7c-4c20-859c-4132b703369e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142360 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-node-pullsecrets\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142375 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142393 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-etcd-serving-ca\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142410 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142428 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c83f1fe0-8bb7-4938-849c-8c199a9fef3b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5jdz9\" (UID: \"c83f1fe0-8bb7-4938-849c-8c199a9fef3b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142447 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-encryption-config\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142465 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-serving-cert\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142482 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-trusted-ca-bundle\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142499 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/469892a0-464b-45d5-8152-53498212b9ac-audit-dir\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142535 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-oauth-serving-cert\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142552 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c83f1fe0-8bb7-4938-849c-8c199a9fef3b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5jdz9\" (UID: \"c83f1fe0-8bb7-4938-849c-8c199a9fef3b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142570 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1380462d-7e7c-4c20-859c-4132b703369e-serving-cert\") pod \"openshift-config-operator-7777fb866f-l6nqv\" (UID: \"1380462d-7e7c-4c20-859c-4132b703369e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142590 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8beb277d-de0d-4076-b16c-8589f849f8de-config\") pod \"machine-approver-56656f9798-z6cqz\" (UID: \"8beb277d-de0d-4076-b16c-8589f849f8de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142606 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-oauth-config\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 
crc kubenswrapper[4644]: I0204 08:43:52.142624 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5739acd-dba1-466b-9397-fba070e97c71-config\") pod \"kube-controller-manager-operator-78b949d7b-47859\" (UID: \"b5739acd-dba1-466b-9397-fba070e97c71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142659 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8beb277d-de0d-4076-b16c-8589f849f8de-auth-proxy-config\") pod \"machine-approver-56656f9798-z6cqz\" (UID: \"8beb277d-de0d-4076-b16c-8589f849f8de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142675 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sz7j\" (UniqueName: \"kubernetes.io/projected/2e271786-29bb-4576-b75a-23568a8b8ae0-kube-api-access-9sz7j\") pod \"dns-operator-744455d44c-9n8df\" (UID: \"2e271786-29bb-4576-b75a-23568a8b8ae0\") " pod="openshift-dns-operator/dns-operator-744455d44c-9n8df" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142693 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9babe17-48df-46b7-9d27-a6698abfa7e7-service-ca-bundle\") pod \"authentication-operator-69f744f599-xh4t4\" (UID: \"d9babe17-48df-46b7-9d27-a6698abfa7e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142712 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-config\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142732 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w6lw\" (UniqueName: \"kubernetes.io/projected/71118688-df29-464b-a113-93f582f8ac6f-kube-api-access-8w6lw\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142749 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-etcd-ca\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142769 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8beb277d-de0d-4076-b16c-8589f849f8de-machine-approver-tls\") pod \"machine-approver-56656f9798-z6cqz\" (UID: \"8beb277d-de0d-4076-b16c-8589f849f8de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.142798 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-etcd-client\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.143554 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-etcd-serving-ca\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.143619 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.145551 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/469892a0-464b-45d5-8152-53498212b9ac-audit-dir\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.146535 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8beb277d-de0d-4076-b16c-8589f849f8de-config\") pod \"machine-approver-56656f9798-z6cqz\" (UID: \"8beb277d-de0d-4076-b16c-8589f849f8de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.146517 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.147606 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.147700 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-config\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.147796 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9babe17-48df-46b7-9d27-a6698abfa7e7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xh4t4\" (UID: \"d9babe17-48df-46b7-9d27-a6698abfa7e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:52 crc 
kubenswrapper[4644]: I0204 08:43:52.147871 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9babe17-48df-46b7-9d27-a6698abfa7e7-serving-cert\") pod \"authentication-operator-69f744f599-xh4t4\" (UID: \"d9babe17-48df-46b7-9d27-a6698abfa7e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.147950 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kbcc\" (UniqueName: \"kubernetes.io/projected/8beb277d-de0d-4076-b16c-8589f849f8de-kube-api-access-8kbcc\") pod \"machine-approver-56656f9798-z6cqz\" (UID: \"8beb277d-de0d-4076-b16c-8589f849f8de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.148040 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdpgr\" (UniqueName: \"kubernetes.io/projected/323e297c-2d63-4230-8110-c7d9c9da3538-kube-api-access-sdpgr\") pod \"downloads-7954f5f757-zcwz9\" (UID: \"323e297c-2d63-4230-8110-c7d9c9da3538\") " pod="openshift-console/downloads-7954f5f757-zcwz9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.148118 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.148185 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-trusted-ca-bundle\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.148262 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.148356 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71118688-df29-464b-a113-93f582f8ac6f-serving-cert\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.148434 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.148513 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/71118688-df29-464b-a113-93f582f8ac6f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.148628 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46wmq\" (UniqueName: \"kubernetes.io/projected/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-kube-api-access-46wmq\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.148707 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-service-ca\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.148776 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m7lx\" (UniqueName: \"kubernetes.io/projected/59a2b9fd-ede9-4e85-8ad0-552716ecca00-kube-api-access-9m7lx\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.148854 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1380462d-7e7c-4c20-859c-4132b703369e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l6nqv\" (UID: \"1380462d-7e7c-4c20-859c-4132b703369e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.148939 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt25f\" (UniqueName: \"kubernetes.io/projected/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-kube-api-access-tt25f\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.149010 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/71118688-df29-464b-a113-93f582f8ac6f-audit-policies\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.149085 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71118688-df29-464b-a113-93f582f8ac6f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.149155 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-config\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: 
I0204 08:43:52.149246 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/71118688-df29-464b-a113-93f582f8ac6f-encryption-config\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.149354 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.149441 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5739acd-dba1-466b-9397-fba070e97c71-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-47859\" (UID: \"b5739acd-dba1-466b-9397-fba070e97c71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.149521 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.149614 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-audit\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.149688 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-audit-dir\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.149775 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.149874 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-audit-policies\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.150308 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/2e271786-29bb-4576-b75a-23568a8b8ae0-metrics-tls\") pod \"dns-operator-744455d44c-9n8df\" (UID: \"2e271786-29bb-4576-b75a-23568a8b8ae0\") " pod="openshift-dns-operator/dns-operator-744455d44c-9n8df" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.150376 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5b2k\" (UniqueName: \"kubernetes.io/projected/469892a0-464b-45d5-8152-53498212b9ac-kube-api-access-p5b2k\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.150419 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71118688-df29-464b-a113-93f582f8ac6f-audit-dir\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.150441 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-serving-cert\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.150463 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ftvh\" (UniqueName: \"kubernetes.io/projected/c83f1fe0-8bb7-4938-849c-8c199a9fef3b-kube-api-access-9ftvh\") pod \"cluster-image-registry-operator-dc59b4c8b-5jdz9\" (UID: \"c83f1fe0-8bb7-4938-849c-8c199a9fef3b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.150488 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.150604 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-audit-policies\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.151192 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.153788 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71118688-df29-464b-a113-93f582f8ac6f-audit-dir\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.154243 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9babe17-48df-46b7-9d27-a6698abfa7e7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xh4t4\" (UID: \"d9babe17-48df-46b7-9d27-a6698abfa7e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.154800 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-etcd-service-ca\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.156242 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-image-import-ca\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.158485 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-oauth-serving-cert\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.158962 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-config\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.159085 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.159461 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9babe17-48df-46b7-9d27-a6698abfa7e7-config\") pod \"authentication-operator-69f744f599-xh4t4\" (UID: \"d9babe17-48df-46b7-9d27-a6698abfa7e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.160392 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.161728 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/8beb277d-de0d-4076-b16c-8589f849f8de-auth-proxy-config\") pod \"machine-approver-56656f9798-z6cqz\" (UID: \"8beb277d-de0d-4076-b16c-8589f849f8de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.161794 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.162823 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.163750 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/71118688-df29-464b-a113-93f582f8ac6f-audit-policies\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.164807 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-config\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.165353 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-etcd-ca\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.165550 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-service-ca\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.168641 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71118688-df29-464b-a113-93f582f8ac6f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.169068 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-config\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.170532 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-node-pullsecrets\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.171214 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d9babe17-48df-46b7-9d27-a6698abfa7e7-service-ca-bundle\") pod \"authentication-operator-69f744f599-xh4t4\" (UID: \"d9babe17-48df-46b7-9d27-a6698abfa7e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.171308 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/71118688-df29-464b-a113-93f582f8ac6f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.171909 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-oauth-config\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.172753 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-serving-cert\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.173144 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-audit\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.175412 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.175580 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-audit-dir\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.175723 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.175724 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.176245 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-etcd-client\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.176268 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.177472 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bgkbq"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.178011 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.178832 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.178946 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-serving-cert\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.179232 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-6qfw4"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.179657 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.176348 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.180277 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.180668 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.183612 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.183664 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.183697 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.184397 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-7z7rd"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.182966 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/71118688-df29-464b-a113-93f582f8ac6f-encryption-config\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.184739 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.184970 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htm2g"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.184997 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.185042 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-7z7rd" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.185341 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.191837 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-etcd-client\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.192978 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.193911 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7rvk\" (UniqueName: \"kubernetes.io/projected/be892c9a-a311-4937-8c75-71fa5452379a-kube-api-access-g7rvk\") pod \"controller-manager-879f6c89f-cghxg\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.194703 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-serving-cert\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.199348 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g4x8k\" (UniqueName: \"kubernetes.io/projected/e892d6f0-1fd8-4a21-9023-7bb810c30396-kube-api-access-g4x8k\") pod \"cluster-samples-operator-665b6dd947-b897n\" (UID: \"e892d6f0-1fd8-4a21-9023-7bb810c30396\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b897n" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.199726 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.199834 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-6qfw4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.200472 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zcwz9"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.200528 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.200543 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.201051 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8beb277d-de0d-4076-b16c-8589f849f8de-machine-approver-tls\") pod \"machine-approver-56656f9798-z6cqz\" (UID: \"8beb277d-de0d-4076-b16c-8589f849f8de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.201883 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-encryption-config\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.202201 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9babe17-48df-46b7-9d27-a6698abfa7e7-serving-cert\") pod \"authentication-operator-69f744f599-xh4t4\" (UID: \"d9babe17-48df-46b7-9d27-a6698abfa7e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.202611 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/71118688-df29-464b-a113-93f582f8ac6f-etcd-client\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.204896 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.210199 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-w4cjj"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.211026 
4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.212743 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71118688-df29-464b-a113-93f582f8ac6f-serving-cert\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.217046 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.218037 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tbxr8"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.220270 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fxwlv"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.222666 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s9vpz"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.222661 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.226050 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h6l9p"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.229030 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.229160 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.232436 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9zrhj"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.232475 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gbncm"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.233545 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.235783 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.236894 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ffncs"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.237714 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.237986 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.239379 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.240492 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6qfw4"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.243935 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.243963 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.244365 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.246237 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-spmn6"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.248567 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.252054 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1380462d-7e7c-4c20-859c-4132b703369e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l6nqv\" (UID: \"1380462d-7e7c-4c20-859c-4132b703369e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.252094 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5739acd-dba1-466b-9397-fba070e97c71-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-47859\" (UID: \"b5739acd-dba1-466b-9397-fba070e97c71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.252142 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2e271786-29bb-4576-b75a-23568a8b8ae0-metrics-tls\") pod \"dns-operator-744455d44c-9n8df\" (UID: \"2e271786-29bb-4576-b75a-23568a8b8ae0\") " pod="openshift-dns-operator/dns-operator-744455d44c-9n8df" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.252169 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ftvh\" (UniqueName: \"kubernetes.io/projected/c83f1fe0-8bb7-4938-849c-8c199a9fef3b-kube-api-access-9ftvh\") pod \"cluster-image-registry-operator-dc59b4c8b-5jdz9\" (UID: \"c83f1fe0-8bb7-4938-849c-8c199a9fef3b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.252192 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5739acd-dba1-466b-9397-fba070e97c71-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-47859\" (UID: \"b5739acd-dba1-466b-9397-fba070e97c71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.252215 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c83f1fe0-8bb7-4938-849c-8c199a9fef3b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5jdz9\" (UID: \"c83f1fe0-8bb7-4938-849c-8c199a9fef3b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.252231 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzxqr\" (UniqueName: \"kubernetes.io/projected/1380462d-7e7c-4c20-859c-4132b703369e-kube-api-access-vzxqr\") pod \"openshift-config-operator-7777fb866f-l6nqv\" (UID: \"1380462d-7e7c-4c20-859c-4132b703369e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.252252 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c83f1fe0-8bb7-4938-849c-8c199a9fef3b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5jdz9\" (UID: 
\"c83f1fe0-8bb7-4938-849c-8c199a9fef3b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.252281 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1380462d-7e7c-4c20-859c-4132b703369e-serving-cert\") pod \"openshift-config-operator-7777fb866f-l6nqv\" (UID: \"1380462d-7e7c-4c20-859c-4132b703369e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.252298 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c83f1fe0-8bb7-4938-849c-8c199a9fef3b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5jdz9\" (UID: \"c83f1fe0-8bb7-4938-849c-8c199a9fef3b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.252317 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5739acd-dba1-466b-9397-fba070e97c71-config\") pod \"kube-controller-manager-operator-78b949d7b-47859\" (UID: \"b5739acd-dba1-466b-9397-fba070e97c71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.252359 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sz7j\" (UniqueName: \"kubernetes.io/projected/2e271786-29bb-4576-b75a-23568a8b8ae0-kube-api-access-9sz7j\") pod \"dns-operator-744455d44c-9n8df\" (UID: \"2e271786-29bb-4576-b75a-23568a8b8ae0\") " pod="openshift-dns-operator/dns-operator-744455d44c-9n8df" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.252531 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-fwfsl"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.253795 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9n8df"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.253821 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fwfsl"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.253892 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fwfsl" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.255245 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1380462d-7e7c-4c20-859c-4132b703369e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l6nqv\" (UID: \"1380462d-7e7c-4c20-859c-4132b703369e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.258809 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2e271786-29bb-4576-b75a-23568a8b8ae0-metrics-tls\") pod \"dns-operator-744455d44c-9n8df\" (UID: \"2e271786-29bb-4576-b75a-23568a8b8ae0\") " pod="openshift-dns-operator/dns-operator-744455d44c-9n8df" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.263696 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.265600 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5739acd-dba1-466b-9397-fba070e97c71-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-47859\" (UID: \"b5739acd-dba1-466b-9397-fba070e97c71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.265700 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5739acd-dba1-466b-9397-fba070e97c71-config\") pod \"kube-controller-manager-operator-78b949d7b-47859\" (UID: \"b5739acd-dba1-466b-9397-fba070e97c71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.267151 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.268908 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.269294 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h6l9p"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.270354 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.270508 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c83f1fe0-8bb7-4938-849c-8c199a9fef3b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5jdz9\" (UID: \"c83f1fe0-8bb7-4938-849c-8c199a9fef3b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.271841 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bgkbq"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.273774 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.274926 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.276529 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.284529 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c83f1fe0-8bb7-4938-849c-8c199a9fef3b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5jdz9\" (UID: \"c83f1fe0-8bb7-4938-849c-8c199a9fef3b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.299774 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.306738 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b897n" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.325818 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.332519 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.337050 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.347653 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.367961 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.376841 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.401485 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.416848 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.436826 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.461189 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.473907 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1380462d-7e7c-4c20-859c-4132b703369e-serving-cert\") pod \"openshift-config-operator-7777fb866f-l6nqv\" (UID: \"1380462d-7e7c-4c20-859c-4132b703369e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.477472 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.482299 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vw5x9"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.502736 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.510179 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.518293 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.537069 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 04 08:43:52 crc kubenswrapper[4644]: W0204 08:43:52.557034 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32e58256_f013_4739_893b_6d403836f94e.slice/crio-06b228065db205b9d5e7ae34c66e8cac9d13d622d2d2e658d22c78f27d53d7d2 WatchSource:0}: Error finding container 06b228065db205b9d5e7ae34c66e8cac9d13d622d2d2e658d22c78f27d53d7d2: Status 404 returned error can't find the container with id 06b228065db205b9d5e7ae34c66e8cac9d13d622d2d2e658d22c78f27d53d7d2 Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.559978 4644 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.576953 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.596656 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.596898 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b897n"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.616816 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.620300 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.637080 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.668145 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cghxg"] Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.677285 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 04 08:43:52 crc kubenswrapper[4644]: W0204 08:43:52.680450 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe892c9a_a311_4937_8c75_71fa5452379a.slice/crio-f5f92d21cadfee88cfbe7d65e42669536f6c6ada9e4d28b7a424376d16e0152f WatchSource:0}: Error finding container f5f92d21cadfee88cfbe7d65e42669536f6c6ada9e4d28b7a424376d16e0152f: Status 404 returned error can't find the container with id f5f92d21cadfee88cfbe7d65e42669536f6c6ada9e4d28b7a424376d16e0152f Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.700888 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.721805 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.738446 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.757357 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.777191 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.797364 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.816859 4644 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.837658 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.859000 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.877045 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.897914 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.917505 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.937609 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.956505 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.977465 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 04 08:43:52 crc kubenswrapper[4644]: I0204 08:43:52.997070 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.017360 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.037507 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.058008 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.078084 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.097556 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.119424 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.137699 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.154942 4644 request.go:700] Waited for 1.018839014s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 
08:43:53.157464 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.178592 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.197810 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.218084 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.237763 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.258632 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.277534 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.297871 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.316509 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.338488 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.359168 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.378582 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.397809 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.417177 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.461501 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5b2k\" (UniqueName: \"kubernetes.io/projected/469892a0-464b-45d5-8152-53498212b9ac-kube-api-access-p5b2k\") pod \"oauth-openshift-558db77b4-fxwlv\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.474977 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4" event={"ID":"b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13","Type":"ContainerStarted","Data":"8983bb5170536aad474db2fbd23b1baa87db1775b28de5664ce273155f8295df"} Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.475048 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4" 
event={"ID":"b512ad07-f332-4a8a-aa3c-e1c7bdc7cf13","Type":"ContainerStarted","Data":"8791f857f2516ce37153afc9233a6cffbc6d25cc6921dce93afc4e512593fc0d"} Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.477601 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9" event={"ID":"c09e24ca-d42d-4f59-9a19-83410a062bb1","Type":"ContainerStarted","Data":"9c364a5b81bc59224881dadd152e664429e2542930daf0c3ea133b5b0e5a89cf"} Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.477633 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9" event={"ID":"c09e24ca-d42d-4f59-9a19-83410a062bb1","Type":"ContainerStarted","Data":"edefa63479d6ed3427a5ea331cfb10d0625b5b5beaf2312a33b84e53ff54b9f6"} Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.477645 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9" event={"ID":"c09e24ca-d42d-4f59-9a19-83410a062bb1","Type":"ContainerStarted","Data":"ec8dd98decbceadf41b1f70e7b3cd973f4561889e142f3931ff8cbb9ac7f3798"} Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.479772 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" event={"ID":"be892c9a-a311-4937-8c75-71fa5452379a","Type":"ContainerStarted","Data":"bb599df5e00d9a7418f84804b15c430f9a495bf4ea8e0b6a9dcc2ae117dc5002"} Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.479824 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" event={"ID":"be892c9a-a311-4937-8c75-71fa5452379a","Type":"ContainerStarted","Data":"f5f92d21cadfee88cfbe7d65e42669536f6c6ada9e4d28b7a424376d16e0152f"} Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.480090 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.481474 4644 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cghxg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.481533 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" podUID="be892c9a-a311-4937-8c75-71fa5452379a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.481778 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b897n" event={"ID":"e892d6f0-1fd8-4a21-9023-7bb810c30396","Type":"ContainerStarted","Data":"75387677715deb1821680072c9e96d42a8ecf51dcf90053d94ef27e7b72a3616"} Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.481805 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b897n" event={"ID":"e892d6f0-1fd8-4a21-9023-7bb810c30396","Type":"ContainerStarted","Data":"4415ad9cd61912f0ffba85ad3c62b5f93ad40153075a64cf7ac7d15a113e149c"} Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 
08:43:53.481818 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b897n" event={"ID":"e892d6f0-1fd8-4a21-9023-7bb810c30396","Type":"ContainerStarted","Data":"1e61d8e3c49124a2c83c0a144e63e4dc3837183595da10ff24708f642fd0adff"} Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.483300 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" event={"ID":"32e58256-f013-4739-893b-6d403836f94e","Type":"ContainerStarted","Data":"057c1d108e53545590dbc8749dcd83e5d3c52ffc65319e8153732e1e9df53b27"} Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.483392 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" event={"ID":"32e58256-f013-4739-893b-6d403836f94e","Type":"ContainerStarted","Data":"06b228065db205b9d5e7ae34c66e8cac9d13d622d2d2e658d22c78f27d53d7d2"} Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.483413 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.488398 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kbcc\" (UniqueName: \"kubernetes.io/projected/8beb277d-de0d-4076-b16c-8589f849f8de-kube-api-access-8kbcc\") pod \"machine-approver-56656f9798-z6cqz\" (UID: \"8beb277d-de0d-4076-b16c-8589f849f8de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.511688 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdpgr\" (UniqueName: \"kubernetes.io/projected/323e297c-2d63-4230-8110-c7d9c9da3538-kube-api-access-sdpgr\") pod \"downloads-7954f5f757-zcwz9\" (UID: \"323e297c-2d63-4230-8110-c7d9c9da3538\") " pod="openshift-console/downloads-7954f5f757-zcwz9" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.522343 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.544629 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqg4l\" (UniqueName: \"kubernetes.io/projected/d9babe17-48df-46b7-9d27-a6698abfa7e7-kube-api-access-bqg4l\") pod \"authentication-operator-69f744f599-xh4t4\" (UID: \"d9babe17-48df-46b7-9d27-a6698abfa7e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.560285 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w6lw\" (UniqueName: \"kubernetes.io/projected/71118688-df29-464b-a113-93f582f8ac6f-kube-api-access-8w6lw\") pod \"apiserver-7bbb656c7d-rb5ld\" (UID: \"71118688-df29-464b-a113-93f582f8ac6f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.579111 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m7lx\" (UniqueName: \"kubernetes.io/projected/59a2b9fd-ede9-4e85-8ad0-552716ecca00-kube-api-access-9m7lx\") pod \"console-f9d7485db-2mwnq\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 
08:43:53.599345 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt25f\" (UniqueName: \"kubernetes.io/projected/e0ab8544-eee1-4ece-aecc-09ae3a228c3c-kube-api-access-tt25f\") pod \"apiserver-76f77b778f-w4cjj\" (UID: \"e0ab8544-eee1-4ece-aecc-09ae3a228c3c\") " pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.609477 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.617295 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46wmq\" (UniqueName: \"kubernetes.io/projected/d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16-kube-api-access-46wmq\") pod \"etcd-operator-b45778765-s9vpz\" (UID: \"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.619698 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.625891 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.635488 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.636963 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.650419 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.678628 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.679784 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.701243 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.706747 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.719166 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.725702 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-zcwz9" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.741539 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.757383 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.777364 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.802811 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.819016 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.841374 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.853852 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.857281 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.867594 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.879984 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.901292 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.916814 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.928528 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-w4cjj"] Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.950037 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.968903 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.978356 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 04 08:43:53 crc kubenswrapper[4644]: I0204 08:43:53.993993 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xh4t4"] Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.004151 4644 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.014147 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2mwnq"] Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.020974 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.057836 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.069057 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.081632 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.099149 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.117210 4644 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.136633 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.159847 4644 request.go:700] Waited for 1.930101996s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.165868 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.218060 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sz7j\" (UniqueName: \"kubernetes.io/projected/2e271786-29bb-4576-b75a-23568a8b8ae0-kube-api-access-9sz7j\") pod \"dns-operator-744455d44c-9n8df\" (UID: \"2e271786-29bb-4576-b75a-23568a8b8ae0\") " pod="openshift-dns-operator/dns-operator-744455d44c-9n8df" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.226830 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5739acd-dba1-466b-9397-fba070e97c71-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-47859\" (UID: \"b5739acd-dba1-466b-9397-fba070e97c71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.234465 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zcwz9"] Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.250817 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ftvh\" (UniqueName: \"kubernetes.io/projected/c83f1fe0-8bb7-4938-849c-8c199a9fef3b-kube-api-access-9ftvh\") pod \"cluster-image-registry-operator-dc59b4c8b-5jdz9\" (UID: \"c83f1fe0-8bb7-4938-849c-8c199a9fef3b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" Feb 04 08:43:54 crc 
kubenswrapper[4644]: I0204 08:43:54.267464 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fxwlv"] Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.272555 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c83f1fe0-8bb7-4938-849c-8c199a9fef3b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5jdz9\" (UID: \"c83f1fe0-8bb7-4938-849c-8c199a9fef3b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.280816 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.284081 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.295172 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s9vpz"] Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.297143 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 04 08:43:54 crc kubenswrapper[4644]: W0204 08:43:54.318574 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd32778c2_ba6d_4b8b_a66f_8b86fc1d0e16.slice/crio-8241f0f95583bd4a0b29d719ffeee8c24228300fb0175e5bf6c6388a3079657d WatchSource:0}: Error finding container 8241f0f95583bd4a0b29d719ffeee8c24228300fb0175e5bf6c6388a3079657d: Status 404 returned error can't find the container with id 8241f0f95583bd4a0b29d719ffeee8c24228300fb0175e5bf6c6388a3079657d Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.332040 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld"] Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.335455 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzxqr\" (UniqueName: \"kubernetes.io/projected/1380462d-7e7c-4c20-859c-4132b703369e-kube-api-access-vzxqr\") pod \"openshift-config-operator-7777fb866f-l6nqv\" (UID: \"1380462d-7e7c-4c20-859c-4132b703369e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.336056 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.351044 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9n8df" Feb 04 08:43:54 crc kubenswrapper[4644]: W0204 08:43:54.355439 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71118688_df29_464b_a113_93f582f8ac6f.slice/crio-1008ac31242f4b817c87d6796eb2f4535421a27d6b40e67fa0a830edaf403d89 WatchSource:0}: Error finding container 1008ac31242f4b817c87d6796eb2f4535421a27d6b40e67fa0a830edaf403d89: Status 404 returned error can't find the container with id 1008ac31242f4b817c87d6796eb2f4535421a27d6b40e67fa0a830edaf403d89 Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.366863 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.381343 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.390428 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-bound-sa-token\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.390465 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c06497d4-3e16-42df-9c4a-657c3db32510-trusted-ca\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.391287 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkjm5\" (UniqueName: \"kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-kube-api-access-qkjm5\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.392479 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.392630 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30482ca0-e995-45ad-9f4b-8bb30c6a044f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sv5v\" (UID: \"30482ca0-e995-45ad-9f4b-8bb30c6a044f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.392698 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef7b53b1-edd2-4468-88e9-7f9764aed161-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-z6grb\" (UID: \"ef7b53b1-edd2-4468-88e9-7f9764aed161\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.392728 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f846627e-2b5c-4fed-8898-e734c9dbce9b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-htm2g\" (UID: \"f846627e-2b5c-4fed-8898-e734c9dbce9b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htm2g" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.392785 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4dk\" (UniqueName: 
\"kubernetes.io/projected/30482ca0-e995-45ad-9f4b-8bb30c6a044f-kube-api-access-ww4dk\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sv5v\" (UID: \"30482ca0-e995-45ad-9f4b-8bb30c6a044f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.392916 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/751e1c50-72f6-4e71-a24c-58072b84ee39-trusted-ca\") pod \"ingress-operator-5b745b69d9-7f8vh\" (UID: \"751e1c50-72f6-4e71-a24c-58072b84ee39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.393124 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.393186 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/751e1c50-72f6-4e71-a24c-58072b84ee39-metrics-tls\") pod \"ingress-operator-5b745b69d9-7f8vh\" (UID: \"751e1c50-72f6-4e71-a24c-58072b84ee39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.393611 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30482ca0-e995-45ad-9f4b-8bb30c6a044f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sv5v\" (UID: \"30482ca0-e995-45ad-9f4b-8bb30c6a044f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v" Feb 04 08:43:54 crc kubenswrapper[4644]: E0204 08:43:54.393631 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:54.893605925 +0000 UTC m=+144.933663680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.393890 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-registry-tls\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.394249 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c394dc5f-ae74-4818-a62e-c824c9c546d1-config\") pod \"console-operator-58897d9998-tbxr8\" (UID: \"c394dc5f-ae74-4818-a62e-c824c9c546d1\") " pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.394294 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c06497d4-3e16-42df-9c4a-657c3db32510-registry-certificates\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.394412 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gtmr\" (UniqueName: \"kubernetes.io/projected/751e1c50-72f6-4e71-a24c-58072b84ee39-kube-api-access-8gtmr\") pod \"ingress-operator-5b745b69d9-7f8vh\" (UID: \"751e1c50-72f6-4e71-a24c-58072b84ee39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.394494 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c06497d4-3e16-42df-9c4a-657c3db32510-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.394528 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c06497d4-3e16-42df-9c4a-657c3db32510-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.394612 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef7b53b1-edd2-4468-88e9-7f9764aed161-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-z6grb\" (UID: \"ef7b53b1-edd2-4468-88e9-7f9764aed161\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb" Feb 04 08:43:54 crc 
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.394759 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5j98\" (UniqueName: \"kubernetes.io/projected/f846627e-2b5c-4fed-8898-e734c9dbce9b-kube-api-access-g5j98\") pod \"control-plane-machine-set-operator-78cbb6b69f-htm2g\" (UID: \"f846627e-2b5c-4fed-8898-e734c9dbce9b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htm2g" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.394808 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c394dc5f-ae74-4818-a62e-c824c9c546d1-trusted-ca\") pod \"console-operator-58897d9998-tbxr8\" (UID: \"c394dc5f-ae74-4818-a62e-c824c9c546d1\") " pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.394854 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c394dc5f-ae74-4818-a62e-c824c9c546d1-serving-cert\") pod \"console-operator-58897d9998-tbxr8\" (UID: \"c394dc5f-ae74-4818-a62e-c824c9c546d1\") " pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.394915 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/751e1c50-72f6-4e71-a24c-58072b84ee39-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7f8vh\" (UID: \"751e1c50-72f6-4e71-a24c-58072b84ee39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.395039 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7b53b1-edd2-4468-88e9-7f9764aed161-config\") pod \"kube-apiserver-operator-766d6c64bb-z6grb\" (UID: \"ef7b53b1-edd2-4468-88e9-7f9764aed161\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.395096 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd8q9\" (UniqueName: \"kubernetes.io/projected/c394dc5f-ae74-4818-a62e-c824c9c546d1-kube-api-access-jd8q9\") pod \"console-operator-58897d9998-tbxr8\" (UID: \"c394dc5f-ae74-4818-a62e-c824c9c546d1\") " pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.496655 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497478 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-plugins-dir\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497525 4644 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7938153c-4023-411c-883b-e0c61b8a955b-service-ca-bundle\") pod \"router-default-5444994796-zjdj8\" (UID: \"7938153c-4023-411c-883b-e0c61b8a955b\") " pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497547 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b4f7d75-0785-47c5-b13c-921a4c98781a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mksqz\" (UID: \"1b4f7d75-0785-47c5-b13c-921a4c98781a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497583 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c394dc5f-ae74-4818-a62e-c824c9c546d1-config\") pod \"console-operator-58897d9998-tbxr8\" (UID: \"c394dc5f-ae74-4818-a62e-c824c9c546d1\") " pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497600 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f90f3d6a-19c8-4559-88d7-983dc38398da-tmpfs\") pod \"packageserver-d55dfcdfc-65hsz\" (UID: \"f90f3d6a-19c8-4559-88d7-983dc38398da\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497630 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmd26\" (UniqueName: \"kubernetes.io/projected/1ddce301-8043-4223-ad17-a83c4bae252e-kube-api-access-qmd26\") pod \"service-ca-9c57cc56f-gbncm\" (UID: \"1ddce301-8043-4223-ad17-a83c4bae252e\") " pod="openshift-service-ca/service-ca-9c57cc56f-gbncm" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497647 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jmss\" (UniqueName: \"kubernetes.io/projected/eb091d64-a21d-4e6b-82f3-96328d665c91-kube-api-access-5jmss\") pod \"package-server-manager-789f6589d5-nmbd7\" (UID: \"eb091d64-a21d-4e6b-82f3-96328d665c91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497665 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gtmr\" (UniqueName: \"kubernetes.io/projected/751e1c50-72f6-4e71-a24c-58072b84ee39-kube-api-access-8gtmr\") pod \"ingress-operator-5b745b69d9-7f8vh\" (UID: \"751e1c50-72f6-4e71-a24c-58072b84ee39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497681 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-mountpoint-dir\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497707 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tpwbl\" (UniqueName: \"kubernetes.io/projected/dabefcba-95c1-4c47-b1aa-3265fe1fe046-kube-api-access-tpwbl\") pod \"service-ca-operator-777779d784-d7hxw\" (UID: \"dabefcba-95c1-4c47-b1aa-3265fe1fe046\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497736 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd6a3257-4cc9-4ec3-b951-10e040c611cd-proxy-tls\") pod \"machine-config-operator-74547568cd-ssxgp\" (UID: \"bd6a3257-4cc9-4ec3-b951-10e040c611cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497751 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-registration-dir\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497789 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5j98\" (UniqueName: \"kubernetes.io/projected/f846627e-2b5c-4fed-8898-e734c9dbce9b-kube-api-access-g5j98\") pod \"control-plane-machine-set-operator-78cbb6b69f-htm2g\" (UID: \"f846627e-2b5c-4fed-8898-e734c9dbce9b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htm2g" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497808 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1ddce301-8043-4223-ad17-a83c4bae252e-signing-key\") pod \"service-ca-9c57cc56f-gbncm\" (UID: \"1ddce301-8043-4223-ad17-a83c4bae252e\") " pod="openshift-service-ca/service-ca-9c57cc56f-gbncm" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497823 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1ddce301-8043-4223-ad17-a83c4bae252e-signing-cabundle\") pod \"service-ca-9c57cc56f-gbncm\" (UID: \"1ddce301-8043-4223-ad17-a83c4bae252e\") " pod="openshift-service-ca/service-ca-9c57cc56f-gbncm" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497863 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz4b4\" (UniqueName: \"kubernetes.io/projected/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-kube-api-access-xz4b4\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497879 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zklrr\" (UniqueName: \"kubernetes.io/projected/c39fb4f7-59e5-4d60-9fcb-bdc5744886c0-kube-api-access-zklrr\") pod \"machine-config-controller-84d6567774-m7c4t\" (UID: \"c39fb4f7-59e5-4d60-9fcb-bdc5744886c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497903 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c394dc5f-ae74-4818-a62e-c824c9c546d1-serving-cert\") pod \"console-operator-58897d9998-tbxr8\" (UID: \"c394dc5f-ae74-4818-a62e-c824c9c546d1\") " pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497918 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhzzv\" (UniqueName: \"kubernetes.io/projected/f90f3d6a-19c8-4559-88d7-983dc38398da-kube-api-access-jhzzv\") pod \"packageserver-d55dfcdfc-65hsz\" (UID: \"f90f3d6a-19c8-4559-88d7-983dc38398da\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497933 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/58096f07-653d-4448-87af-35c02d2b4047-certs\") pod \"machine-config-server-7z7rd\" (UID: \"58096f07-653d-4448-87af-35c02d2b4047\") " pod="openshift-machine-config-operator/machine-config-server-7z7rd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.497977 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48ff7085-6976-40a6-a50f-b1d6cbbd90aa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nbzsd\" (UID: \"48ff7085-6976-40a6-a50f-b1d6cbbd90aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498047 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd8q9\" (UniqueName: \"kubernetes.io/projected/c394dc5f-ae74-4818-a62e-c824c9c546d1-kube-api-access-jd8q9\") pod \"console-operator-58897d9998-tbxr8\" (UID: \"c394dc5f-ae74-4818-a62e-c824c9c546d1\") " pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498062 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-config-volume\") pod \"collect-profiles-29503230-44vlf\" (UID: \"aa2f8af2-85ef-4b5d-a95a-194c6a05a501\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498078 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm54z\" (UniqueName: \"kubernetes.io/projected/58096f07-653d-4448-87af-35c02d2b4047-kube-api-access-lm54z\") pod \"machine-config-server-7z7rd\" (UID: \"58096f07-653d-4448-87af-35c02d2b4047\") " pod="openshift-machine-config-operator/machine-config-server-7z7rd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498102 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b4f7d75-0785-47c5-b13c-921a4c98781a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mksqz\" (UID: \"1b4f7d75-0785-47c5-b13c-921a4c98781a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498116 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/7938153c-4023-411c-883b-e0c61b8a955b-metrics-certs\") pod \"router-default-5444994796-zjdj8\" (UID: \"7938153c-4023-411c-883b-e0c61b8a955b\") " pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498130 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrr2m\" (UniqueName: \"kubernetes.io/projected/b015e246-f316-443a-adc9-56b69f929ae8-kube-api-access-mrr2m\") pod \"multus-admission-controller-857f4d67dd-spmn6\" (UID: \"b015e246-f316-443a-adc9-56b69f929ae8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-spmn6" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498196 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c06497d4-3e16-42df-9c4a-657c3db32510-trusted-ca\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498223 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30482ca0-e995-45ad-9f4b-8bb30c6a044f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sv5v\" (UID: \"30482ca0-e995-45ad-9f4b-8bb30c6a044f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498240 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb091d64-a21d-4e6b-82f3-96328d665c91-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nmbd7\" (UID: \"eb091d64-a21d-4e6b-82f3-96328d665c91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498269 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f846627e-2b5c-4fed-8898-e734c9dbce9b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-htm2g\" (UID: \"f846627e-2b5c-4fed-8898-e734c9dbce9b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htm2g" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498284 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww4dk\" (UniqueName: \"kubernetes.io/projected/30482ca0-e995-45ad-9f4b-8bb30c6a044f-kube-api-access-ww4dk\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sv5v\" (UID: \"30482ca0-e995-45ad-9f4b-8bb30c6a044f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498301 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabefcba-95c1-4c47-b1aa-3265fe1fe046-config\") pod \"service-ca-operator-777779d784-d7hxw\" (UID: \"dabefcba-95c1-4c47-b1aa-3265fe1fe046\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498318 4644 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-secret-volume\") pod \"collect-profiles-29503230-44vlf\" (UID: \"aa2f8af2-85ef-4b5d-a95a-194c6a05a501\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498364 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/751e1c50-72f6-4e71-a24c-58072b84ee39-trusted-ca\") pod \"ingress-operator-5b745b69d9-7f8vh\" (UID: \"751e1c50-72f6-4e71-a24c-58072b84ee39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498379 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd6a3257-4cc9-4ec3-b951-10e040c611cd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ssxgp\" (UID: \"bd6a3257-4cc9-4ec3-b951-10e040c611cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498394 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7938153c-4023-411c-883b-e0c61b8a955b-default-certificate\") pod \"router-default-5444994796-zjdj8\" (UID: \"7938153c-4023-411c-883b-e0c61b8a955b\") " pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498411 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr25w\" (UniqueName: \"kubernetes.io/projected/814f95d8-7d8e-4e74-93c1-9f372d7fdc6f-kube-api-access-nr25w\") pod \"migrator-59844c95c7-ffncs\" (UID: \"814f95d8-7d8e-4e74-93c1-9f372d7fdc6f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffncs" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498428 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4nw2\" (UniqueName: \"kubernetes.io/projected/48d56a70-84c8-41cb-ba71-e74768d42190-kube-api-access-n4nw2\") pod \"dns-default-6qfw4\" (UID: \"48d56a70-84c8-41cb-ba71-e74768d42190\") " pod="openshift-dns/dns-default-6qfw4" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498453 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/751e1c50-72f6-4e71-a24c-58072b84ee39-metrics-tls\") pod \"ingress-operator-5b745b69d9-7f8vh\" (UID: \"751e1c50-72f6-4e71-a24c-58072b84ee39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498469 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b4f7d75-0785-47c5-b13c-921a4c98781a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mksqz\" (UID: \"1b4f7d75-0785-47c5-b13c-921a4c98781a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498514 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bgkbq\" (UID: \"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498531 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30482ca0-e995-45ad-9f4b-8bb30c6a044f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sv5v\" (UID: \"30482ca0-e995-45ad-9f4b-8bb30c6a044f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498547 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f90f3d6a-19c8-4559-88d7-983dc38398da-apiservice-cert\") pod \"packageserver-d55dfcdfc-65hsz\" (UID: \"f90f3d6a-19c8-4559-88d7-983dc38398da\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498561 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f90f3d6a-19c8-4559-88d7-983dc38398da-webhook-cert\") pod \"packageserver-d55dfcdfc-65hsz\" (UID: \"f90f3d6a-19c8-4559-88d7-983dc38398da\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498578 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-registry-tls\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498601 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c06497d4-3e16-42df-9c4a-657c3db32510-registry-certificates\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498618 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b015e246-f316-443a-adc9-56b69f929ae8-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-spmn6\" (UID: \"b015e246-f316-443a-adc9-56b69f929ae8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-spmn6" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498634 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dabefcba-95c1-4c47-b1aa-3265fe1fe046-serving-cert\") pod \"service-ca-operator-777779d784-d7hxw\" (UID: \"dabefcba-95c1-4c47-b1aa-3265fe1fe046\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw" Feb 04 08:43:54 crc kubenswrapper[4644]: E0204 08:43:54.498655 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:54.998639376 +0000 UTC m=+145.038697131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498696 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-socket-dir\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498740 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c06497d4-3e16-42df-9c4a-657c3db32510-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498759 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c06497d4-3e16-42df-9c4a-657c3db32510-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498787 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef7b53b1-edd2-4468-88e9-7f9764aed161-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-z6grb\" (UID: \"ef7b53b1-edd2-4468-88e9-7f9764aed161\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498805 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr7g2\" (UniqueName: \"kubernetes.io/projected/7938153c-4023-411c-883b-e0c61b8a955b-kube-api-access-sr7g2\") pod \"router-default-5444994796-zjdj8\" (UID: \"7938153c-4023-411c-883b-e0c61b8a955b\") " pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498821 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48d56a70-84c8-41cb-ba71-e74768d42190-config-volume\") pod \"dns-default-6qfw4\" (UID: \"48d56a70-84c8-41cb-ba71-e74768d42190\") " pod="openshift-dns/dns-default-6qfw4" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498854 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/48d56a70-84c8-41cb-ba71-e74768d42190-metrics-tls\") pod \"dns-default-6qfw4\" (UID: \"48d56a70-84c8-41cb-ba71-e74768d42190\") " pod="openshift-dns/dns-default-6qfw4"
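
NOTE: The unmount path hits the same wall: TearDown of the same PVC for the departed pod 8f668bae-612b-4b75-9490-919e737c6a3b also needs a CSI client, so both the mount for image-registry-697d97f7c8-9zrhj and this unmount are parked with 500ms backoffs (m=+144.93 and m=+145.03 respectively). Both should clear once csi-hostpathplugin-h6l9p starts and registers over the socket it drops into its registration-dir host path (conventionally /var/lib/kubelet/plugins_registry). The sketch below, again illustrative rather than kubelet source, shows how a single registration event unblocks both parked operations on their next retry.

    package main

    import (
    	"fmt"
    	"time"
    )

    // pendingOp models an operation parked by the volume manager.
    type pendingOp struct {
    	name    string
    	retryAt time.Time // now + durationBeforeRetry from the log
    }

    func main() {
    	const driver = "kubevirt.io.hostpath-provisioner"
    	registered := map[string]bool{}
    	pending := []pendingOp{
    		{"MountDevice pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8", time.Now().Add(500 * time.Millisecond)},
    		{"TearDown pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8", time.Now().Add(500 * time.Millisecond)},
    	}

    	// The node plugin comes up and announces itself; from then on the
    	// driver lookup that failed above starts succeeding.
    	registered[driver] = true

    	for _, op := range pending {
    		time.Sleep(time.Until(op.retryAt))
    		if registered[driver] {
    			fmt.Println(op.name, "retried and succeeded")
    		}
    	}
    }
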
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498880 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62ff01da-a638-4ea1-8d32-1ff8c230ecdc-srv-cert\") pod \"catalog-operator-68c6474976-p8qql\" (UID: \"62ff01da-a638-4ea1-8d32-1ff8c230ecdc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498895 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/58096f07-653d-4448-87af-35c02d2b4047-node-bootstrap-token\") pod \"machine-config-server-7z7rd\" (UID: \"58096f07-653d-4448-87af-35c02d2b4047\") " pod="openshift-machine-config-operator/machine-config-server-7z7rd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498912 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c394dc5f-ae74-4818-a62e-c824c9c546d1-trusted-ca\") pod \"console-operator-58897d9998-tbxr8\" (UID: \"c394dc5f-ae74-4818-a62e-c824c9c546d1\") " pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498929 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nzrp\" (UniqueName: \"kubernetes.io/projected/62ff01da-a638-4ea1-8d32-1ff8c230ecdc-kube-api-access-8nzrp\") pod \"catalog-operator-68c6474976-p8qql\" (UID: \"62ff01da-a638-4ea1-8d32-1ff8c230ecdc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498944 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7938153c-4023-411c-883b-e0c61b8a955b-stats-auth\") pod \"router-default-5444994796-zjdj8\" (UID: \"7938153c-4023-411c-883b-e0c61b8a955b\") " pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498959 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c39fb4f7-59e5-4d60-9fcb-bdc5744886c0-proxy-tls\") pod \"machine-config-controller-84d6567774-m7c4t\" (UID: \"c39fb4f7-59e5-4d60-9fcb-bdc5744886c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498983 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48ff7085-6976-40a6-a50f-b1d6cbbd90aa-srv-cert\") pod \"olm-operator-6b444d44fb-nbzsd\" (UID: \"48ff7085-6976-40a6-a50f-b1d6cbbd90aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.498998 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62ff01da-a638-4ea1-8d32-1ff8c230ecdc-profile-collector-cert\") pod \"catalog-operator-68c6474976-p8qql\" (UID: \"62ff01da-a638-4ea1-8d32-1ff8c230ecdc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499014 4644 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/751e1c50-72f6-4e71-a24c-58072b84ee39-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7f8vh\" (UID: \"751e1c50-72f6-4e71-a24c-58072b84ee39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499029 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd6a3257-4cc9-4ec3-b951-10e040c611cd-images\") pod \"machine-config-operator-74547568cd-ssxgp\" (UID: \"bd6a3257-4cc9-4ec3-b951-10e040c611cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499045 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn5lq\" (UniqueName: \"kubernetes.io/projected/bd6a3257-4cc9-4ec3-b951-10e040c611cd-kube-api-access-qn5lq\") pod \"machine-config-operator-74547568cd-ssxgp\" (UID: \"bd6a3257-4cc9-4ec3-b951-10e040c611cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499061 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd112a00-09af-441f-b41f-29e12865d2ac-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tx5v9\" (UID: \"fd112a00-09af-441f-b41f-29e12865d2ac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499087 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7b53b1-edd2-4468-88e9-7f9764aed161-config\") pod \"kube-apiserver-operator-766d6c64bb-z6grb\" (UID: \"ef7b53b1-edd2-4468-88e9-7f9764aed161\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499115 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd112a00-09af-441f-b41f-29e12865d2ac-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tx5v9\" (UID: \"fd112a00-09af-441f-b41f-29e12865d2ac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499132 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-csi-data-dir\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499159 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3424ac8e-3024-48a5-b7d1-235b3ca9b5e6-cert\") pod \"ingress-canary-fwfsl\" (UID: \"3424ac8e-3024-48a5-b7d1-235b3ca9b5e6\") " pod="openshift-ingress-canary/ingress-canary-fwfsl" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499175 4644 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c39fb4f7-59e5-4d60-9fcb-bdc5744886c0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m7c4t\" (UID: \"c39fb4f7-59e5-4d60-9fcb-bdc5744886c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499190 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zpmh\" (UniqueName: \"kubernetes.io/projected/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-kube-api-access-8zpmh\") pod \"marketplace-operator-79b997595-bgkbq\" (UID: \"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499205 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk6lr\" (UniqueName: \"kubernetes.io/projected/3424ac8e-3024-48a5-b7d1-235b3ca9b5e6-kube-api-access-kk6lr\") pod \"ingress-canary-fwfsl\" (UID: \"3424ac8e-3024-48a5-b7d1-235b3ca9b5e6\") " pod="openshift-ingress-canary/ingress-canary-fwfsl" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499231 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-bound-sa-token\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499256 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkjm5\" (UniqueName: \"kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-kube-api-access-qkjm5\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499293 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jz99\" (UniqueName: \"kubernetes.io/projected/48ff7085-6976-40a6-a50f-b1d6cbbd90aa-kube-api-access-5jz99\") pod \"olm-operator-6b444d44fb-nbzsd\" (UID: \"48ff7085-6976-40a6-a50f-b1d6cbbd90aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499344 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef7b53b1-edd2-4468-88e9-7f9764aed161-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-z6grb\" (UID: \"ef7b53b1-edd2-4468-88e9-7f9764aed161\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499362 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bgkbq\" (UID: \"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499411 4644 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pt9c\" (UniqueName: \"kubernetes.io/projected/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-kube-api-access-2pt9c\") pod \"collect-profiles-29503230-44vlf\" (UID: \"aa2f8af2-85ef-4b5d-a95a-194c6a05a501\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.499427 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sj56\" (UniqueName: \"kubernetes.io/projected/fd112a00-09af-441f-b41f-29e12865d2ac-kube-api-access-2sj56\") pod \"kube-storage-version-migrator-operator-b67b599dd-tx5v9\" (UID: \"fd112a00-09af-441f-b41f-29e12865d2ac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.500385 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c394dc5f-ae74-4818-a62e-c824c9c546d1-config\") pod \"console-operator-58897d9998-tbxr8\" (UID: \"c394dc5f-ae74-4818-a62e-c824c9c546d1\") " pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.500957 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c06497d4-3e16-42df-9c4a-657c3db32510-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.505771 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7b53b1-edd2-4468-88e9-7f9764aed161-config\") pod \"kube-apiserver-operator-766d6c64bb-z6grb\" (UID: \"ef7b53b1-edd2-4468-88e9-7f9764aed161\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.513225 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c394dc5f-ae74-4818-a62e-c824c9c546d1-trusted-ca\") pod \"console-operator-58897d9998-tbxr8\" (UID: \"c394dc5f-ae74-4818-a62e-c824c9c546d1\") " pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.513599 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c06497d4-3e16-42df-9c4a-657c3db32510-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.514909 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/751e1c50-72f6-4e71-a24c-58072b84ee39-trusted-ca\") pod \"ingress-operator-5b745b69d9-7f8vh\" (UID: \"751e1c50-72f6-4e71-a24c-58072b84ee39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.515207 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/c06497d4-3e16-42df-9c4a-657c3db32510-registry-certificates\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.521987 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30482ca0-e995-45ad-9f4b-8bb30c6a044f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sv5v\" (UID: \"30482ca0-e995-45ad-9f4b-8bb30c6a044f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.523210 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c06497d4-3e16-42df-9c4a-657c3db32510-trusted-ca\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.530223 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef7b53b1-edd2-4468-88e9-7f9764aed161-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-z6grb\" (UID: \"ef7b53b1-edd2-4468-88e9-7f9764aed161\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.530654 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c394dc5f-ae74-4818-a62e-c824c9c546d1-serving-cert\") pod \"console-operator-58897d9998-tbxr8\" (UID: \"c394dc5f-ae74-4818-a62e-c824c9c546d1\") " pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.530882 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f846627e-2b5c-4fed-8898-e734c9dbce9b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-htm2g\" (UID: \"f846627e-2b5c-4fed-8898-e734c9dbce9b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htm2g" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.530942 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-registry-tls\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.532238 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/751e1c50-72f6-4e71-a24c-58072b84ee39-metrics-tls\") pod \"ingress-operator-5b745b69d9-7f8vh\" (UID: \"751e1c50-72f6-4e71-a24c-58072b84ee39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.534879 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5j98\" (UniqueName: \"kubernetes.io/projected/f846627e-2b5c-4fed-8898-e734c9dbce9b-kube-api-access-g5j98\") pod \"control-plane-machine-set-operator-78cbb6b69f-htm2g\" (UID: \"f846627e-2b5c-4fed-8898-e734c9dbce9b\") 
" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htm2g" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.539706 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30482ca0-e995-45ad-9f4b-8bb30c6a044f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sv5v\" (UID: \"30482ca0-e995-45ad-9f4b-8bb30c6a044f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.539919 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gtmr\" (UniqueName: \"kubernetes.io/projected/751e1c50-72f6-4e71-a24c-58072b84ee39-kube-api-access-8gtmr\") pod \"ingress-operator-5b745b69d9-7f8vh\" (UID: \"751e1c50-72f6-4e71-a24c-58072b84ee39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.563475 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww4dk\" (UniqueName: \"kubernetes.io/projected/30482ca0-e995-45ad-9f4b-8bb30c6a044f-kube-api-access-ww4dk\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sv5v\" (UID: \"30482ca0-e995-45ad-9f4b-8bb30c6a044f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.566184 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd8q9\" (UniqueName: \"kubernetes.io/projected/c394dc5f-ae74-4818-a62e-c824c9c546d1-kube-api-access-jd8q9\") pod \"console-operator-58897d9998-tbxr8\" (UID: \"c394dc5f-ae74-4818-a62e-c824c9c546d1\") " pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.571092 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" event={"ID":"d9babe17-48df-46b7-9d27-a6698abfa7e7","Type":"ContainerStarted","Data":"a3a55ad897bdec8827f9a60bc0d862a5301895d41b14fd4af6bae3eae2c32eda"} Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.571137 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" event={"ID":"d9babe17-48df-46b7-9d27-a6698abfa7e7","Type":"ContainerStarted","Data":"f4e35e6689d2936d6276e4c88d00241942076971d816637283b392d7a4d07e68"} Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.580851 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/751e1c50-72f6-4e71-a24c-58072b84ee39-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7f8vh\" (UID: \"751e1c50-72f6-4e71-a24c-58072b84ee39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.592992 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" event={"ID":"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16","Type":"ContainerStarted","Data":"8241f0f95583bd4a0b29d719ffeee8c24228300fb0175e5bf6c6388a3079657d"} Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.596724 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef7b53b1-edd2-4468-88e9-7f9764aed161-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-z6grb\" (UID: \"ef7b53b1-edd2-4468-88e9-7f9764aed161\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600000 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd6a3257-4cc9-4ec3-b951-10e040c611cd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ssxgp\" (UID: \"bd6a3257-4cc9-4ec3-b951-10e040c611cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600039 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7938153c-4023-411c-883b-e0c61b8a955b-default-certificate\") pod \"router-default-5444994796-zjdj8\" (UID: \"7938153c-4023-411c-883b-e0c61b8a955b\") " pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600057 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr25w\" (UniqueName: \"kubernetes.io/projected/814f95d8-7d8e-4e74-93c1-9f372d7fdc6f-kube-api-access-nr25w\") pod \"migrator-59844c95c7-ffncs\" (UID: \"814f95d8-7d8e-4e74-93c1-9f372d7fdc6f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffncs" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600077 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4nw2\" (UniqueName: \"kubernetes.io/projected/48d56a70-84c8-41cb-ba71-e74768d42190-kube-api-access-n4nw2\") pod \"dns-default-6qfw4\" (UID: \"48d56a70-84c8-41cb-ba71-e74768d42190\") " pod="openshift-dns/dns-default-6qfw4" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600093 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b4f7d75-0785-47c5-b13c-921a4c98781a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mksqz\" (UID: \"1b4f7d75-0785-47c5-b13c-921a4c98781a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600109 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bgkbq\" (UID: \"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600127 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f90f3d6a-19c8-4559-88d7-983dc38398da-apiservice-cert\") pod \"packageserver-d55dfcdfc-65hsz\" (UID: \"f90f3d6a-19c8-4559-88d7-983dc38398da\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600140 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f90f3d6a-19c8-4559-88d7-983dc38398da-webhook-cert\") pod \"packageserver-d55dfcdfc-65hsz\" (UID: \"f90f3d6a-19c8-4559-88d7-983dc38398da\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" 
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600161 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b015e246-f316-443a-adc9-56b69f929ae8-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-spmn6\" (UID: \"b015e246-f316-443a-adc9-56b69f929ae8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-spmn6" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600179 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dabefcba-95c1-4c47-b1aa-3265fe1fe046-serving-cert\") pod \"service-ca-operator-777779d784-d7hxw\" (UID: \"dabefcba-95c1-4c47-b1aa-3265fe1fe046\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600196 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-socket-dir\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600214 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr7g2\" (UniqueName: \"kubernetes.io/projected/7938153c-4023-411c-883b-e0c61b8a955b-kube-api-access-sr7g2\") pod \"router-default-5444994796-zjdj8\" (UID: \"7938153c-4023-411c-883b-e0c61b8a955b\") " pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600229 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48d56a70-84c8-41cb-ba71-e74768d42190-config-volume\") pod \"dns-default-6qfw4\" (UID: \"48d56a70-84c8-41cb-ba71-e74768d42190\") " pod="openshift-dns/dns-default-6qfw4" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600243 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/48d56a70-84c8-41cb-ba71-e74768d42190-metrics-tls\") pod \"dns-default-6qfw4\" (UID: \"48d56a70-84c8-41cb-ba71-e74768d42190\") " pod="openshift-dns/dns-default-6qfw4" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600258 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62ff01da-a638-4ea1-8d32-1ff8c230ecdc-srv-cert\") pod \"catalog-operator-68c6474976-p8qql\" (UID: \"62ff01da-a638-4ea1-8d32-1ff8c230ecdc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600291 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/58096f07-653d-4448-87af-35c02d2b4047-node-bootstrap-token\") pod \"machine-config-server-7z7rd\" (UID: \"58096f07-653d-4448-87af-35c02d2b4047\") " pod="openshift-machine-config-operator/machine-config-server-7z7rd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600311 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nzrp\" (UniqueName: \"kubernetes.io/projected/62ff01da-a638-4ea1-8d32-1ff8c230ecdc-kube-api-access-8nzrp\") pod \"catalog-operator-68c6474976-p8qql\" (UID: 
\"62ff01da-a638-4ea1-8d32-1ff8c230ecdc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600344 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7938153c-4023-411c-883b-e0c61b8a955b-stats-auth\") pod \"router-default-5444994796-zjdj8\" (UID: \"7938153c-4023-411c-883b-e0c61b8a955b\") " pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600362 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c39fb4f7-59e5-4d60-9fcb-bdc5744886c0-proxy-tls\") pod \"machine-config-controller-84d6567774-m7c4t\" (UID: \"c39fb4f7-59e5-4d60-9fcb-bdc5744886c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600378 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48ff7085-6976-40a6-a50f-b1d6cbbd90aa-srv-cert\") pod \"olm-operator-6b444d44fb-nbzsd\" (UID: \"48ff7085-6976-40a6-a50f-b1d6cbbd90aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600394 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62ff01da-a638-4ea1-8d32-1ff8c230ecdc-profile-collector-cert\") pod \"catalog-operator-68c6474976-p8qql\" (UID: \"62ff01da-a638-4ea1-8d32-1ff8c230ecdc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600411 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd6a3257-4cc9-4ec3-b951-10e040c611cd-images\") pod \"machine-config-operator-74547568cd-ssxgp\" (UID: \"bd6a3257-4cc9-4ec3-b951-10e040c611cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600427 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn5lq\" (UniqueName: \"kubernetes.io/projected/bd6a3257-4cc9-4ec3-b951-10e040c611cd-kube-api-access-qn5lq\") pod \"machine-config-operator-74547568cd-ssxgp\" (UID: \"bd6a3257-4cc9-4ec3-b951-10e040c611cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600445 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd112a00-09af-441f-b41f-29e12865d2ac-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tx5v9\" (UID: \"fd112a00-09af-441f-b41f-29e12865d2ac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600463 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd112a00-09af-441f-b41f-29e12865d2ac-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tx5v9\" (UID: \"fd112a00-09af-441f-b41f-29e12865d2ac\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600480 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-csi-data-dir\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600496 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3424ac8e-3024-48a5-b7d1-235b3ca9b5e6-cert\") pod \"ingress-canary-fwfsl\" (UID: \"3424ac8e-3024-48a5-b7d1-235b3ca9b5e6\") " pod="openshift-ingress-canary/ingress-canary-fwfsl" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600515 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c39fb4f7-59e5-4d60-9fcb-bdc5744886c0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m7c4t\" (UID: \"c39fb4f7-59e5-4d60-9fcb-bdc5744886c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600532 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zpmh\" (UniqueName: \"kubernetes.io/projected/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-kube-api-access-8zpmh\") pod \"marketplace-operator-79b997595-bgkbq\" (UID: \"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600548 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk6lr\" (UniqueName: \"kubernetes.io/projected/3424ac8e-3024-48a5-b7d1-235b3ca9b5e6-kube-api-access-kk6lr\") pod \"ingress-canary-fwfsl\" (UID: \"3424ac8e-3024-48a5-b7d1-235b3ca9b5e6\") " pod="openshift-ingress-canary/ingress-canary-fwfsl" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600591 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jz99\" (UniqueName: \"kubernetes.io/projected/48ff7085-6976-40a6-a50f-b1d6cbbd90aa-kube-api-access-5jz99\") pod \"olm-operator-6b444d44fb-nbzsd\" (UID: \"48ff7085-6976-40a6-a50f-b1d6cbbd90aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600607 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bgkbq\" (UID: \"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600641 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pt9c\" (UniqueName: \"kubernetes.io/projected/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-kube-api-access-2pt9c\") pod \"collect-profiles-29503230-44vlf\" (UID: \"aa2f8af2-85ef-4b5d-a95a-194c6a05a501\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600657 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2sj56\" (UniqueName: \"kubernetes.io/projected/fd112a00-09af-441f-b41f-29e12865d2ac-kube-api-access-2sj56\") pod \"kube-storage-version-migrator-operator-b67b599dd-tx5v9\" (UID: \"fd112a00-09af-441f-b41f-29e12865d2ac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600676 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600694 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-plugins-dir\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600711 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7938153c-4023-411c-883b-e0c61b8a955b-service-ca-bundle\") pod \"router-default-5444994796-zjdj8\" (UID: \"7938153c-4023-411c-883b-e0c61b8a955b\") " pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600726 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b4f7d75-0785-47c5-b13c-921a4c98781a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mksqz\" (UID: \"1b4f7d75-0785-47c5-b13c-921a4c98781a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600742 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f90f3d6a-19c8-4559-88d7-983dc38398da-tmpfs\") pod \"packageserver-d55dfcdfc-65hsz\" (UID: \"f90f3d6a-19c8-4559-88d7-983dc38398da\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600759 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmd26\" (UniqueName: \"kubernetes.io/projected/1ddce301-8043-4223-ad17-a83c4bae252e-kube-api-access-qmd26\") pod \"service-ca-9c57cc56f-gbncm\" (UID: \"1ddce301-8043-4223-ad17-a83c4bae252e\") " pod="openshift-service-ca/service-ca-9c57cc56f-gbncm" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600775 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jmss\" (UniqueName: \"kubernetes.io/projected/eb091d64-a21d-4e6b-82f3-96328d665c91-kube-api-access-5jmss\") pod \"package-server-manager-789f6589d5-nmbd7\" (UID: \"eb091d64-a21d-4e6b-82f3-96328d665c91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600790 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-mountpoint-dir\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600814 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpwbl\" (UniqueName: \"kubernetes.io/projected/dabefcba-95c1-4c47-b1aa-3265fe1fe046-kube-api-access-tpwbl\") pod \"service-ca-operator-777779d784-d7hxw\" (UID: \"dabefcba-95c1-4c47-b1aa-3265fe1fe046\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600830 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd6a3257-4cc9-4ec3-b951-10e040c611cd-proxy-tls\") pod \"machine-config-operator-74547568cd-ssxgp\" (UID: \"bd6a3257-4cc9-4ec3-b951-10e040c611cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600848 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-registration-dir\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600864 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1ddce301-8043-4223-ad17-a83c4bae252e-signing-key\") pod \"service-ca-9c57cc56f-gbncm\" (UID: \"1ddce301-8043-4223-ad17-a83c4bae252e\") " pod="openshift-service-ca/service-ca-9c57cc56f-gbncm" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600879 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1ddce301-8043-4223-ad17-a83c4bae252e-signing-cabundle\") pod \"service-ca-9c57cc56f-gbncm\" (UID: \"1ddce301-8043-4223-ad17-a83c4bae252e\") " pod="openshift-service-ca/service-ca-9c57cc56f-gbncm" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600894 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz4b4\" (UniqueName: \"kubernetes.io/projected/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-kube-api-access-xz4b4\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600908 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zklrr\" (UniqueName: \"kubernetes.io/projected/c39fb4f7-59e5-4d60-9fcb-bdc5744886c0-kube-api-access-zklrr\") pod \"machine-config-controller-84d6567774-m7c4t\" (UID: \"c39fb4f7-59e5-4d60-9fcb-bdc5744886c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600925 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhzzv\" (UniqueName: \"kubernetes.io/projected/f90f3d6a-19c8-4559-88d7-983dc38398da-kube-api-access-jhzzv\") pod \"packageserver-d55dfcdfc-65hsz\" (UID: \"f90f3d6a-19c8-4559-88d7-983dc38398da\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" 
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600940 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/58096f07-653d-4448-87af-35c02d2b4047-certs\") pod \"machine-config-server-7z7rd\" (UID: \"58096f07-653d-4448-87af-35c02d2b4047\") " pod="openshift-machine-config-operator/machine-config-server-7z7rd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600961 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48ff7085-6976-40a6-a50f-b1d6cbbd90aa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nbzsd\" (UID: \"48ff7085-6976-40a6-a50f-b1d6cbbd90aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600980 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-config-volume\") pod \"collect-profiles-29503230-44vlf\" (UID: \"aa2f8af2-85ef-4b5d-a95a-194c6a05a501\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.600995 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm54z\" (UniqueName: \"kubernetes.io/projected/58096f07-653d-4448-87af-35c02d2b4047-kube-api-access-lm54z\") pod \"machine-config-server-7z7rd\" (UID: \"58096f07-653d-4448-87af-35c02d2b4047\") " pod="openshift-machine-config-operator/machine-config-server-7z7rd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.601011 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b4f7d75-0785-47c5-b13c-921a4c98781a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mksqz\" (UID: \"1b4f7d75-0785-47c5-b13c-921a4c98781a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.601026 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7938153c-4023-411c-883b-e0c61b8a955b-metrics-certs\") pod \"router-default-5444994796-zjdj8\" (UID: \"7938153c-4023-411c-883b-e0c61b8a955b\") " pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.601041 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrr2m\" (UniqueName: \"kubernetes.io/projected/b015e246-f316-443a-adc9-56b69f929ae8-kube-api-access-mrr2m\") pod \"multus-admission-controller-857f4d67dd-spmn6\" (UID: \"b015e246-f316-443a-adc9-56b69f929ae8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-spmn6" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.601066 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb091d64-a21d-4e6b-82f3-96328d665c91-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nmbd7\" (UID: \"eb091d64-a21d-4e6b-82f3-96328d665c91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.601083 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-secret-volume\") pod \"collect-profiles-29503230-44vlf\" (UID: \"aa2f8af2-85ef-4b5d-a95a-194c6a05a501\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.601098 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabefcba-95c1-4c47-b1aa-3265fe1fe046-config\") pod \"service-ca-operator-777779d784-d7hxw\" (UID: \"dabefcba-95c1-4c47-b1aa-3265fe1fe046\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.601466 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48d56a70-84c8-41cb-ba71-e74768d42190-config-volume\") pod \"dns-default-6qfw4\" (UID: \"48d56a70-84c8-41cb-ba71-e74768d42190\") " pod="openshift-dns/dns-default-6qfw4" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.601748 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabefcba-95c1-4c47-b1aa-3265fe1fe046-config\") pod \"service-ca-operator-777779d784-d7hxw\" (UID: \"dabefcba-95c1-4c47-b1aa-3265fe1fe046\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.604851 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-csi-data-dir\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.604980 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f90f3d6a-19c8-4559-88d7-983dc38398da-tmpfs\") pod \"packageserver-d55dfcdfc-65hsz\" (UID: \"f90f3d6a-19c8-4559-88d7-983dc38398da\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.605001 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-mountpoint-dir\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.610626 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-plugins-dir\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.610824 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" event={"ID":"469892a0-464b-45d5-8152-53498212b9ac","Type":"ContainerStarted","Data":"4762bf36103fb52fb6ef08bfd6adfa95413780f9ffc8ef13aa3ad4fbc28242ef"} Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.610993 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-registration-dir\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.611531 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-config-volume\") pod \"collect-profiles-29503230-44vlf\" (UID: \"aa2f8af2-85ef-4b5d-a95a-194c6a05a501\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.612275 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c39fb4f7-59e5-4d60-9fcb-bdc5744886c0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m7c4t\" (UID: \"c39fb4f7-59e5-4d60-9fcb-bdc5744886c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.613165 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bgkbq\" (UID: \"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" Feb 04 08:43:54 crc kubenswrapper[4644]: E0204 08:43:54.613506 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:55.113493442 +0000 UTC m=+145.153551197 (durationBeforeRetry 500ms). 
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.614778 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd6a3257-4cc9-4ec3-b951-10e040c611cd-images\") pod \"machine-config-operator-74547568cd-ssxgp\" (UID: \"bd6a3257-4cc9-4ec3-b951-10e040c611cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp"
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.615294 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62ff01da-a638-4ea1-8d32-1ff8c230ecdc-srv-cert\") pod \"catalog-operator-68c6474976-p8qql\" (UID: \"62ff01da-a638-4ea1-8d32-1ff8c230ecdc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql"
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.615992 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/58096f07-653d-4448-87af-35c02d2b4047-node-bootstrap-token\") pod \"machine-config-server-7z7rd\" (UID: \"58096f07-653d-4448-87af-35c02d2b4047\") " pod="openshift-machine-config-operator/machine-config-server-7z7rd"
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.616912 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7938153c-4023-411c-883b-e0c61b8a955b-service-ca-bundle\") pod \"router-default-5444994796-zjdj8\" (UID: \"7938153c-4023-411c-883b-e0c61b8a955b\") " pod="openshift-ingress/router-default-5444994796-zjdj8"
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.617968 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1ddce301-8043-4223-ad17-a83c4bae252e-signing-cabundle\") pod \"service-ca-9c57cc56f-gbncm\" (UID: \"1ddce301-8043-4223-ad17-a83c4bae252e\") " pod="openshift-service-ca/service-ca-9c57cc56f-gbncm"
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.619090 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd112a00-09af-441f-b41f-29e12865d2ac-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tx5v9\" (UID: \"fd112a00-09af-441f-b41f-29e12865d2ac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9"
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.619972 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b4f7d75-0785-47c5-b13c-921a4c98781a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mksqz\" (UID: \"1b4f7d75-0785-47c5-b13c-921a4c98781a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz"
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.621740 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-socket-dir\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.621743 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd6a3257-4cc9-4ec3-b951-10e040c611cd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ssxgp\" (UID: \"bd6a3257-4cc9-4ec3-b951-10e040c611cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.623779 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-bound-sa-token\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.624293 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c39fb4f7-59e5-4d60-9fcb-bdc5744886c0-proxy-tls\") pod \"machine-config-controller-84d6567774-m7c4t\" (UID: \"c39fb4f7-59e5-4d60-9fcb-bdc5744886c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.624754 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48ff7085-6976-40a6-a50f-b1d6cbbd90aa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nbzsd\" (UID: \"48ff7085-6976-40a6-a50f-b1d6cbbd90aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.625574 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3424ac8e-3024-48a5-b7d1-235b3ca9b5e6-cert\") pod \"ingress-canary-fwfsl\" (UID: \"3424ac8e-3024-48a5-b7d1-235b3ca9b5e6\") " pod="openshift-ingress-canary/ingress-canary-fwfsl" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.626318 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd6a3257-4cc9-4ec3-b951-10e040c611cd-proxy-tls\") pod \"machine-config-operator-74547568cd-ssxgp\" (UID: \"bd6a3257-4cc9-4ec3-b951-10e040c611cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.626570 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62ff01da-a638-4ea1-8d32-1ff8c230ecdc-profile-collector-cert\") pod \"catalog-operator-68c6474976-p8qql\" (UID: \"62ff01da-a638-4ea1-8d32-1ff8c230ecdc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.626843 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48ff7085-6976-40a6-a50f-b1d6cbbd90aa-srv-cert\") pod \"olm-operator-6b444d44fb-nbzsd\" (UID: \"48ff7085-6976-40a6-a50f-b1d6cbbd90aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.627069 
4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/48d56a70-84c8-41cb-ba71-e74768d42190-metrics-tls\") pod \"dns-default-6qfw4\" (UID: \"48d56a70-84c8-41cb-ba71-e74768d42190\") " pod="openshift-dns/dns-default-6qfw4" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.633040 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bgkbq\" (UID: \"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.633542 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b015e246-f316-443a-adc9-56b69f929ae8-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-spmn6\" (UID: \"b015e246-f316-443a-adc9-56b69f929ae8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-spmn6" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.633764 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd112a00-09af-441f-b41f-29e12865d2ac-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tx5v9\" (UID: \"fd112a00-09af-441f-b41f-29e12865d2ac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.633845 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b4f7d75-0785-47c5-b13c-921a4c98781a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mksqz\" (UID: \"1b4f7d75-0785-47c5-b13c-921a4c98781a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.633888 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1ddce301-8043-4223-ad17-a83c4bae252e-signing-key\") pod \"service-ca-9c57cc56f-gbncm\" (UID: \"1ddce301-8043-4223-ad17-a83c4bae252e\") " pod="openshift-service-ca/service-ca-9c57cc56f-gbncm" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.634225 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/58096f07-653d-4448-87af-35c02d2b4047-certs\") pod \"machine-config-server-7z7rd\" (UID: \"58096f07-653d-4448-87af-35c02d2b4047\") " pod="openshift-machine-config-operator/machine-config-server-7z7rd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.634243 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7938153c-4023-411c-883b-e0c61b8a955b-default-certificate\") pod \"router-default-5444994796-zjdj8\" (UID: \"7938153c-4023-411c-883b-e0c61b8a955b\") " pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.634709 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb091d64-a21d-4e6b-82f3-96328d665c91-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nmbd7\" (UID: 
\"eb091d64-a21d-4e6b-82f3-96328d665c91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.635857 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2mwnq" event={"ID":"59a2b9fd-ede9-4e85-8ad0-552716ecca00","Type":"ContainerStarted","Data":"42bfef5cbf799a1f7ccb735a90ebff65e6fb59b266cb4f82d8a3f34b1e073f77"} Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.635891 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2mwnq" event={"ID":"59a2b9fd-ede9-4e85-8ad0-552716ecca00","Type":"ContainerStarted","Data":"b5bfe16215a339f45d9ddebe44bdd09f9d773e0a185297cf003f87147109bf13"} Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.636229 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f90f3d6a-19c8-4559-88d7-983dc38398da-webhook-cert\") pod \"packageserver-d55dfcdfc-65hsz\" (UID: \"f90f3d6a-19c8-4559-88d7-983dc38398da\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.636528 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.638071 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7938153c-4023-411c-883b-e0c61b8a955b-stats-auth\") pod \"router-default-5444994796-zjdj8\" (UID: \"7938153c-4023-411c-883b-e0c61b8a955b\") " pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.638092 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zcwz9" event={"ID":"323e297c-2d63-4230-8110-c7d9c9da3538","Type":"ContainerStarted","Data":"5d7a1a18ce451eca36e3c771e496999db98d64ad96dbc59caf700b49dbcc5491"} Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.638492 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zcwz9" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.644001 4644 patch_prober.go:28] interesting pod/downloads-7954f5f757-zcwz9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.644049 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zcwz9" podUID="323e297c-2d63-4230-8110-c7d9c9da3538" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.644371 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dabefcba-95c1-4c47-b1aa-3265fe1fe046-serving-cert\") pod \"service-ca-operator-777779d784-d7hxw\" (UID: \"dabefcba-95c1-4c47-b1aa-3265fe1fe046\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.644509 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-secret-volume\") pod \"collect-profiles-29503230-44vlf\" (UID: \"aa2f8af2-85ef-4b5d-a95a-194c6a05a501\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.644940 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f90f3d6a-19c8-4559-88d7-983dc38398da-apiservice-cert\") pod \"packageserver-d55dfcdfc-65hsz\" (UID: \"f90f3d6a-19c8-4559-88d7-983dc38398da\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.644959 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7938153c-4023-411c-883b-e0c61b8a955b-metrics-certs\") pod \"router-default-5444994796-zjdj8\" (UID: \"7938153c-4023-411c-883b-e0c61b8a955b\") " pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.665130 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkjm5\" (UniqueName: \"kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-kube-api-access-qkjm5\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.686467 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.702411 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.702949 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htm2g" Feb 04 08:43:54 crc kubenswrapper[4644]: E0204 08:43:54.702672 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:55.202645694 +0000 UTC m=+145.242703459 (durationBeforeRetry 500ms). 
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.708086 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nzrp\" (UniqueName: \"kubernetes.io/projected/62ff01da-a638-4ea1-8d32-1ff8c230ecdc-kube-api-access-8nzrp\") pod \"catalog-operator-68c6474976-p8qql\" (UID: \"62ff01da-a638-4ea1-8d32-1ff8c230ecdc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql"
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.708893 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj"
Feb 04 08:43:54 crc kubenswrapper[4644]: E0204 08:43:54.711536 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:55.211514738 +0000 UTC m=+145.251572493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.715647 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v"
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.718985 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmd26\" (UniqueName: \"kubernetes.io/projected/1ddce301-8043-4223-ad17-a83c4bae252e-kube-api-access-qmd26\") pod \"service-ca-9c57cc56f-gbncm\" (UID: \"1ddce301-8043-4223-ad17-a83c4bae252e\") " pod="openshift-service-ca/service-ca-9c57cc56f-gbncm"
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.727881 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn5lq\" (UniqueName: \"kubernetes.io/projected/bd6a3257-4cc9-4ec3-b951-10e040c611cd-kube-api-access-qn5lq\") pod \"machine-config-operator-74547568cd-ssxgp\" (UID: \"bd6a3257-4cc9-4ec3-b951-10e040c611cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp"
Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.735773 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.758067 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" event={"ID":"e0ab8544-eee1-4ece-aecc-09ae3a228c3c","Type":"ContainerStarted","Data":"03d0bf0028752bbf890e2ba20ae7a2bc3e80db7b91da85206fad7a8209c22bec"} Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.758106 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" event={"ID":"71118688-df29-464b-a113-93f582f8ac6f","Type":"ContainerStarted","Data":"1008ac31242f4b817c87d6796eb2f4535421a27d6b40e67fa0a830edaf403d89"} Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.760069 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jmss\" (UniqueName: \"kubernetes.io/projected/eb091d64-a21d-4e6b-82f3-96328d665c91-kube-api-access-5jmss\") pod \"package-server-manager-789f6589d5-nmbd7\" (UID: \"eb091d64-a21d-4e6b-82f3-96328d665c91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.763122 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpwbl\" (UniqueName: \"kubernetes.io/projected/dabefcba-95c1-4c47-b1aa-3265fe1fe046-kube-api-access-tpwbl\") pod \"service-ca-operator-777779d784-d7hxw\" (UID: \"dabefcba-95c1-4c47-b1aa-3265fe1fe046\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.793404 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" event={"ID":"8beb277d-de0d-4076-b16c-8589f849f8de","Type":"ContainerStarted","Data":"cc1af3acbc7a8000d5118d97b14d151db479eae275fdab34cc15a7848713d0ef"} Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.793458 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" event={"ID":"8beb277d-de0d-4076-b16c-8589f849f8de","Type":"ContainerStarted","Data":"7eac64157c83ed927602a2d5564103feba4e7291d8ef450199418c53451dd53f"} Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.799215 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz4b4\" (UniqueName: \"kubernetes.io/projected/0a6c3ee4-ea6a-47ac-af16-983d3265fdad-kube-api-access-xz4b4\") pod \"csi-hostpathplugin-h6l9p\" (UID: \"0a6c3ee4-ea6a-47ac-af16-983d3265fdad\") " pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.799825 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b4f7d75-0785-47c5-b13c-921a4c98781a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mksqz\" (UID: \"1b4f7d75-0785-47c5-b13c-921a4c98781a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.812085 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.812110 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" Feb 04 08:43:54 crc kubenswrapper[4644]: E0204 08:43:54.812731 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:55.31271095 +0000 UTC m=+145.352768705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.825550 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.835077 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zklrr\" (UniqueName: \"kubernetes.io/projected/c39fb4f7-59e5-4d60-9fcb-bdc5744886c0-kube-api-access-zklrr\") pod \"machine-config-controller-84d6567774-m7c4t\" (UID: \"c39fb4f7-59e5-4d60-9fcb-bdc5744886c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.835821 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.845728 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gbncm" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.865659 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.866146 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhzzv\" (UniqueName: \"kubernetes.io/projected/f90f3d6a-19c8-4559-88d7-983dc38398da-kube-api-access-jhzzv\") pod \"packageserver-d55dfcdfc-65hsz\" (UID: \"f90f3d6a-19c8-4559-88d7-983dc38398da\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.870756 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.871925 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.901010 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm54z\" (UniqueName: \"kubernetes.io/projected/58096f07-653d-4448-87af-35c02d2b4047-kube-api-access-lm54z\") pod \"machine-config-server-7z7rd\" (UID: \"58096f07-653d-4448-87af-35c02d2b4047\") " pod="openshift-machine-config-operator/machine-config-server-7z7rd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.913677 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zpmh\" (UniqueName: \"kubernetes.io/projected/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-kube-api-access-8zpmh\") pod \"marketplace-operator-79b997595-bgkbq\" (UID: \"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.915025 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:54 crc kubenswrapper[4644]: E0204 08:43:54.926263 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:55.425944593 +0000 UTC m=+145.466002348 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.934187 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.934358 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jz99\" (UniqueName: \"kubernetes.io/projected/48ff7085-6976-40a6-a50f-b1d6cbbd90aa-kube-api-access-5jz99\") pod \"olm-operator-6b444d44fb-nbzsd\" (UID: \"48ff7085-6976-40a6-a50f-b1d6cbbd90aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.942851 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk6lr\" (UniqueName: \"kubernetes.io/projected/3424ac8e-3024-48a5-b7d1-235b3ca9b5e6-kube-api-access-kk6lr\") pod \"ingress-canary-fwfsl\" (UID: \"3424ac8e-3024-48a5-b7d1-235b3ca9b5e6\") " pod="openshift-ingress-canary/ingress-canary-fwfsl" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.943559 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-7z7rd" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.958248 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pt9c\" (UniqueName: \"kubernetes.io/projected/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-kube-api-access-2pt9c\") pod \"collect-profiles-29503230-44vlf\" (UID: \"aa2f8af2-85ef-4b5d-a95a-194c6a05a501\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.973509 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" Feb 04 08:43:54 crc kubenswrapper[4644]: I0204 08:43:54.982622 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fwfsl" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.010685 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrr2m\" (UniqueName: \"kubernetes.io/projected/b015e246-f316-443a-adc9-56b69f929ae8-kube-api-access-mrr2m\") pod \"multus-admission-controller-857f4d67dd-spmn6\" (UID: \"b015e246-f316-443a-adc9-56b69f929ae8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-spmn6" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.013269 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sj56\" (UniqueName: \"kubernetes.io/projected/fd112a00-09af-441f-b41f-29e12865d2ac-kube-api-access-2sj56\") pod \"kube-storage-version-migrator-operator-b67b599dd-tx5v9\" (UID: \"fd112a00-09af-441f-b41f-29e12865d2ac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.015899 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:55 crc kubenswrapper[4644]: E0204 08:43:55.017051 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:55.517012505 +0000 UTC m=+145.557070260 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.017262 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:55 crc kubenswrapper[4644]: E0204 08:43:55.018075 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:55.517989195 +0000 UTC m=+145.558046950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.026212 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4nw2\" (UniqueName: \"kubernetes.io/projected/48d56a70-84c8-41cb-ba71-e74768d42190-kube-api-access-n4nw2\") pod \"dns-default-6qfw4\" (UID: \"48d56a70-84c8-41cb-ba71-e74768d42190\") " pod="openshift-dns/dns-default-6qfw4" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.039203 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr25w\" (UniqueName: \"kubernetes.io/projected/814f95d8-7d8e-4e74-93c1-9f372d7fdc6f-kube-api-access-nr25w\") pod \"migrator-59844c95c7-ffncs\" (UID: \"814f95d8-7d8e-4e74-93c1-9f372d7fdc6f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffncs" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.054934 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.085635 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffncs" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.091141 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr7g2\" (UniqueName: \"kubernetes.io/projected/7938153c-4023-411c-883b-e0c61b8a955b-kube-api-access-sr7g2\") pod \"router-default-5444994796-zjdj8\" (UID: \"7938153c-4023-411c-883b-e0c61b8a955b\") " pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.091602 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.118673 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:55 crc kubenswrapper[4644]: E0204 08:43:55.119192 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:55.619166527 +0000 UTC m=+145.659224282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.136292 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-spmn6" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.200531 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.209178 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.228248 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:55 crc kubenswrapper[4644]: E0204 08:43:55.229041 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:55.729018959 +0000 UTC m=+145.769076714 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.235847 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.264061 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-6qfw4" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.332045 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:55 crc kubenswrapper[4644]: E0204 08:43:55.332156 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:55.832133791 +0000 UTC m=+145.872191546 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.332727 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:55 crc kubenswrapper[4644]: E0204 08:43:55.333083 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:55.833070261 +0000 UTC m=+145.873128016 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.352978 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9n8df"] Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.354690 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv"] Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.375522 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.383411 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9"] Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.409579 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859"] Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.432705 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-2mwnq" podStartSLOduration=124.43266146 podStartE2EDuration="2m4.43266146s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:55.432261611 +0000 UTC m=+145.472319366" watchObservedRunningTime="2026-02-04 08:43:55.43266146 +0000 UTC m=+145.472719215" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.433636 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:55 crc kubenswrapper[4644]: E0204 08:43:55.433962 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:55.933941676 +0000 UTC m=+145.973999431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.434015 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:55 crc kubenswrapper[4644]: E0204 08:43:55.434962 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:55.934946357 +0000 UTC m=+145.975004112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.541034 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:55 crc kubenswrapper[4644]: E0204 08:43:55.541158 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:56.041127672 +0000 UTC m=+146.081185427 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.543143 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:55 crc kubenswrapper[4644]: E0204 08:43:55.543723 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:56.043705596 +0000 UTC m=+146.083763351 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.644834 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:55 crc kubenswrapper[4644]: E0204 08:43:55.645241 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:56.145224235 +0000 UTC m=+146.185281990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.664143 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tbxr8"] Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.743940 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" podStartSLOduration=124.743905816 podStartE2EDuration="2m4.743905816s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:55.693410367 +0000 UTC m=+145.733468132" watchObservedRunningTime="2026-02-04 08:43:55.743905816 +0000 UTC m=+145.783963571" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.746300 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:55 crc kubenswrapper[4644]: E0204 08:43:55.747728 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:56.247712404 +0000 UTC m=+146.287770159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.784246 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htm2g"] Feb 04 08:43:55 crc kubenswrapper[4644]: W0204 08:43:55.803487 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc394dc5f_ae74_4818_a62e_c824c9c546d1.slice/crio-44bdb6f1dc39a4be3398139a62a44771f2706ff42b3b7757ddc85ef48f8a5584 WatchSource:0}: Error finding container 44bdb6f1dc39a4be3398139a62a44771f2706ff42b3b7757ddc85ef48f8a5584: Status 404 returned error can't find the container with id 44bdb6f1dc39a4be3398139a62a44771f2706ff42b3b7757ddc85ef48f8a5584 Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.832406 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" event={"ID":"d32778c2-ba6d-4b8b-a66f-8b86fc1d0e16","Type":"ContainerStarted","Data":"80266c8db99b99abe7c85707a69146d9d8be9075a308eff3009f4f994a0d07aa"} Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.848180 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:55 crc kubenswrapper[4644]: E0204 08:43:55.849430 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:56.349413317 +0000 UTC m=+146.389471072 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.876010 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" event={"ID":"8beb277d-de0d-4076-b16c-8589f849f8de","Type":"ContainerStarted","Data":"2e244fb3774d6f627d8ec51bc0714dbb0efd23d1b6acee03262e2beb2bd01d7a"} Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.907426 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9n8df" event={"ID":"2e271786-29bb-4576-b75a-23568a8b8ae0","Type":"ContainerStarted","Data":"f19ce4223312dad283a115401d2eceddf0d6ce2d6bb513c354bc1ab8fd780d5e"} Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.942434 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cc6f4" podStartSLOduration=124.94241932 podStartE2EDuration="2m4.94241932s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:55.940455519 +0000 UTC m=+145.980513274" watchObservedRunningTime="2026-02-04 08:43:55.94241932 +0000 UTC m=+145.982477075" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.950945 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:55 crc kubenswrapper[4644]: E0204 08:43:55.952001 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:56.451988128 +0000 UTC m=+146.492045883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.955521 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh"] Feb 04 08:43:55 crc kubenswrapper[4644]: W0204 08:43:55.958388 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf846627e_2b5c_4fed_8898_e734c9dbce9b.slice/crio-99c15311ae56af40da733ef0580e4a84674b79daa25d80379acc15a7934a9d4b WatchSource:0}: Error finding container 99c15311ae56af40da733ef0580e4a84674b79daa25d80379acc15a7934a9d4b: Status 404 returned error can't find the container with id 99c15311ae56af40da733ef0580e4a84674b79daa25d80379acc15a7934a9d4b Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.959065 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" event={"ID":"469892a0-464b-45d5-8152-53498212b9ac","Type":"ContainerStarted","Data":"2b14848c8b0087b18b5fe37fc5b76e88fc17ba01567dcb106cfb5c8c197340f5"} Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.959859 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.981412 4644 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fxwlv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" start-of-body= Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.981461 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" podUID="469892a0-464b-45d5-8152-53498212b9ac" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.989794 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zcwz9" event={"ID":"323e297c-2d63-4230-8110-c7d9c9da3538","Type":"ContainerStarted","Data":"24a07324b24e79e1852685b839a9cd54c9285f6b5c36eb6facd6819ba3c7059c"} Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.990604 4644 patch_prober.go:28] interesting pod/downloads-7954f5f757-zcwz9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.990645 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zcwz9" podUID="323e297c-2d63-4230-8110-c7d9c9da3538" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.997771 4644 generic.go:334] 
"Generic (PLEG): container finished" podID="e0ab8544-eee1-4ece-aecc-09ae3a228c3c" containerID="12ab84207a688dcf43eefcb470417e7607c72a1dbd8979a0a3777de509a41828" exitCode=0 Feb 04 08:43:55 crc kubenswrapper[4644]: I0204 08:43:55.997843 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" event={"ID":"e0ab8544-eee1-4ece-aecc-09ae3a228c3c","Type":"ContainerDied","Data":"12ab84207a688dcf43eefcb470417e7607c72a1dbd8979a0a3777de509a41828"} Feb 04 08:43:55 crc kubenswrapper[4644]: W0204 08:43:55.999044 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58096f07_653d_4448_87af_35c02d2b4047.slice/crio-13d875edd195efa9001b54eabf72c2d73375dbc2f4f0ab3c62d28a67f8d8b240 WatchSource:0}: Error finding container 13d875edd195efa9001b54eabf72c2d73375dbc2f4f0ab3c62d28a67f8d8b240: Status 404 returned error can't find the container with id 13d875edd195efa9001b54eabf72c2d73375dbc2f4f0ab3c62d28a67f8d8b240 Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.008017 4644 generic.go:334] "Generic (PLEG): container finished" podID="71118688-df29-464b-a113-93f582f8ac6f" containerID="dd84484d25b4cd9df582167c41ae27fd4deed92c88c9f20f37a99cc2e38dbdb8" exitCode=0 Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.008150 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" event={"ID":"71118688-df29-464b-a113-93f582f8ac6f","Type":"ContainerDied","Data":"dd84484d25b4cd9df582167c41ae27fd4deed92c88c9f20f37a99cc2e38dbdb8"} Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.010138 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" event={"ID":"1380462d-7e7c-4c20-859c-4132b703369e","Type":"ContainerStarted","Data":"3bc7bcad5f555b1b7b036679a237011ff4b7bdad1e743e24bc21b0700b18e7e9"} Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.020810 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859" event={"ID":"b5739acd-dba1-466b-9397-fba070e97c71","Type":"ContainerStarted","Data":"a2aa0e9ed8c81c0b3a5db8b05ae67dbb14cd36d9c41d4854fdacf235dea6a91c"} Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.023620 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" event={"ID":"c83f1fe0-8bb7-4938-849c-8c199a9fef3b","Type":"ContainerStarted","Data":"84821a511844b5b67c0dc24fc04b85e3d03e1eace40caddc4b62ac2d6c4e9272"} Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.057652 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:56 crc kubenswrapper[4644]: E0204 08:43:56.059139 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:56.559124393 +0000 UTC m=+146.599182148 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.098108 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb"] Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.119488 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b897n" podStartSLOduration=125.119466987 podStartE2EDuration="2m5.119466987s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:56.115495794 +0000 UTC m=+146.155553559" watchObservedRunningTime="2026-02-04 08:43:56.119466987 +0000 UTC m=+146.159524742" Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.162541 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:56 crc kubenswrapper[4644]: E0204 08:43:56.162888 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:56.662873519 +0000 UTC m=+146.702931274 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:56 crc kubenswrapper[4644]: W0204 08:43:56.189632 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef7b53b1_edd2_4468_88e9_7f9764aed161.slice/crio-f6d5d357b176f4c89ff87147b8243db35135276bf2e8d9ed18aacd8974113686 WatchSource:0}: Error finding container f6d5d357b176f4c89ff87147b8243db35135276bf2e8d9ed18aacd8974113686: Status 404 returned error can't find the container with id f6d5d357b176f4c89ff87147b8243db35135276bf2e8d9ed18aacd8974113686 Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.269689 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:56 crc kubenswrapper[4644]: E0204 08:43:56.270823 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:56.770795701 +0000 UTC m=+146.810853456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.339372 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp"] Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.376624 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:56 crc kubenswrapper[4644]: E0204 08:43:56.386215 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:56.886192308 +0000 UTC m=+146.926250063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.485053 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:56 crc kubenswrapper[4644]: E0204 08:43:56.485371 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:56.985348378 +0000 UTC m=+147.025406133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.485786 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:56 crc kubenswrapper[4644]: E0204 08:43:56.491455 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:56.991431405 +0000 UTC m=+147.031489160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.591900 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:56 crc kubenswrapper[4644]: E0204 08:43:56.592238 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:57.092205277 +0000 UTC m=+147.132263032 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.593523 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:56 crc kubenswrapper[4644]: E0204 08:43:56.593905 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:57.093897073 +0000 UTC m=+147.133954828 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.693878 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:56 crc kubenswrapper[4644]: E0204 08:43:56.694151 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:57.194135865 +0000 UTC m=+147.234193620 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.699888 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-vw5x9" podStartSLOduration=125.699862294 podStartE2EDuration="2m5.699862294s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:56.691634753 +0000 UTC m=+146.731692518" watchObservedRunningTime="2026-02-04 08:43:56.699862294 +0000 UTC m=+146.739920039" Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.720966 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v"] Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.795146 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:56 crc kubenswrapper[4644]: E0204 08:43:56.796246 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:57.296215355 +0000 UTC m=+147.336273270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.898987 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:56 crc kubenswrapper[4644]: E0204 08:43:56.899292 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:57.399275106 +0000 UTC m=+147.439332862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.919010 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" podStartSLOduration=125.918983496 podStartE2EDuration="2m5.918983496s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:56.899616913 +0000 UTC m=+146.939674668" watchObservedRunningTime="2026-02-04 08:43:56.918983496 +0000 UTC m=+146.959041251" Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.949172 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" podStartSLOduration=125.949148403 podStartE2EDuration="2m5.949148403s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:56.94036459 +0000 UTC m=+146.980422365" watchObservedRunningTime="2026-02-04 08:43:56.949148403 +0000 UTC m=+146.989206158" Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.983896 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-zcwz9" podStartSLOduration=125.983875934 podStartE2EDuration="2m5.983875934s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:56.978453061 +0000 UTC m=+147.018510826" watchObservedRunningTime="2026-02-04 08:43:56.983875934 +0000 UTC m=+147.023933689" Feb 04 08:43:56 crc kubenswrapper[4644]: I0204 08:43:56.999890 4644 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:57 crc kubenswrapper[4644]: E0204 08:43:57.000202 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:57.500190213 +0000 UTC m=+147.540247978 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:57 crc kubenswrapper[4644]: W0204 08:43:57.090946 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30482ca0_e995_45ad_9f4b_8bb30c6a044f.slice/crio-d0c343b2c746066bf2d98c1b60044c1bd5819aba916b9d687f00b45d0d269836 WatchSource:0}: Error finding container d0c343b2c746066bf2d98c1b60044c1bd5819aba916b9d687f00b45d0d269836: Status 404 returned error can't find the container with id d0c343b2c746066bf2d98c1b60044c1bd5819aba916b9d687f00b45d0d269836 Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.091374 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" podStartSLOduration=126.091361987 podStartE2EDuration="2m6.091361987s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:57.091178503 +0000 UTC m=+147.131236248" watchObservedRunningTime="2026-02-04 08:43:57.091361987 +0000 UTC m=+147.131419742" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.101075 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:57 crc kubenswrapper[4644]: E0204 08:43:57.101447 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:57.601432196 +0000 UTC m=+147.641489951 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.141552 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" event={"ID":"bd6a3257-4cc9-4ec3-b951-10e040c611cd","Type":"ContainerStarted","Data":"bd4a4589a2b826e455b6f9a95646855e359ae09ed4ad5ffdcb709804ea6ecba8"} Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.157483 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" event={"ID":"751e1c50-72f6-4e71-a24c-58072b84ee39","Type":"ContainerStarted","Data":"d7c8bfe6b6da525d8d78edfda48e057d5eeac0d3d240152c51e03d09aa31d96c"} Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.185024 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-s9vpz" podStartSLOduration=126.185003802 podStartE2EDuration="2m6.185003802s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:57.127026147 +0000 UTC m=+147.167083932" watchObservedRunningTime="2026-02-04 08:43:57.185003802 +0000 UTC m=+147.225061557" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.191718 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htm2g" event={"ID":"f846627e-2b5c-4fed-8898-e734c9dbce9b","Type":"ContainerStarted","Data":"99c15311ae56af40da733ef0580e4a84674b79daa25d80379acc15a7934a9d4b"} Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.203134 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:57 crc kubenswrapper[4644]: E0204 08:43:57.206718 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:57.706697513 +0000 UTC m=+147.746755268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.227560 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb" event={"ID":"ef7b53b1-edd2-4468-88e9-7f9764aed161","Type":"ContainerStarted","Data":"f6d5d357b176f4c89ff87147b8243db35135276bf2e8d9ed18aacd8974113686"} Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.233796 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z6cqz" podStartSLOduration=126.233778676 podStartE2EDuration="2m6.233778676s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:57.232586921 +0000 UTC m=+147.272644676" watchObservedRunningTime="2026-02-04 08:43:57.233778676 +0000 UTC m=+147.273836431" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.303939 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" event={"ID":"1380462d-7e7c-4c20-859c-4132b703369e","Type":"ContainerStarted","Data":"6d80e66aef07ddaded220012f30f156e9a25ca99bcd6ac00cb29ff4e84ad2eaa"} Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.352549 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:57 crc kubenswrapper[4644]: E0204 08:43:57.353563 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:57.853547453 +0000 UTC m=+147.893605208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.454087 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:57 crc kubenswrapper[4644]: E0204 08:43:57.454945 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:57.954932169 +0000 UTC m=+147.994989924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.471101 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" event={"ID":"c83f1fe0-8bb7-4938-849c-8c199a9fef3b","Type":"ContainerStarted","Data":"01a024f8dcab95ae9a4a4a8decc6f99eab085fee7199b3fd79bab8a21eed8bb9"} Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.476673 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t"] Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.563488 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.563809 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.563882 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 
08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.563970 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.563996 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:57 crc kubenswrapper[4644]: E0204 08:43:57.565030 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:58.065009337 +0000 UTC m=+148.105067102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.576513 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.588668 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zjdj8" event={"ID":"7938153c-4023-411c-883b-e0c61b8a955b","Type":"ContainerStarted","Data":"6bae9b93a1fe25d3acfa6a8944c0faf501df257b605e61bff7762ec8e1f700f6"} Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.591161 4644 csr.go:261] certificate signing request csr-jm2w2 is approved, waiting to be issued Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.598261 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5jdz9" podStartSLOduration=126.598239527 podStartE2EDuration="2m6.598239527s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:57.539797563 +0000 UTC m=+147.579855318" watchObservedRunningTime="2026-02-04 08:43:57.598239527 +0000 UTC m=+147.638297282" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.601669 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.602822 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.610490 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.611446 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-spmn6"] Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.625130 4644 csr.go:257] certificate signing request csr-jm2w2 is issued Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.649545 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-7z7rd" event={"ID":"58096f07-653d-4448-87af-35c02d2b4047","Type":"ContainerStarted","Data":"13d875edd195efa9001b54eabf72c2d73375dbc2f4f0ab3c62d28a67f8d8b240"} Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.665262 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:57 crc kubenswrapper[4644]: E0204 08:43:57.668420 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:58.168404875 +0000 UTC m=+148.208462630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.682828 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tbxr8" event={"ID":"c394dc5f-ae74-4818-a62e-c824c9c546d1","Type":"ContainerStarted","Data":"5f90aba290153e78df2254a6141b1148921a6ad136b18d1373151142fdf2a3d9"} Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.682866 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tbxr8" event={"ID":"c394dc5f-ae74-4818-a62e-c824c9c546d1","Type":"ContainerStarted","Data":"44bdb6f1dc39a4be3398139a62a44771f2706ff42b3b7757ddc85ef48f8a5584"} Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.683448 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.700155 4644 patch_prober.go:28] interesting pod/console-operator-58897d9998-tbxr8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.700208 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tbxr8" podUID="c394dc5f-ae74-4818-a62e-c824c9c546d1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.700447 4644 patch_prober.go:28] interesting pod/downloads-7954f5f757-zcwz9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.700489 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zcwz9" podUID="323e297c-2d63-4230-8110-c7d9c9da3538" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.701811 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-zjdj8" podStartSLOduration=126.701798468 podStartE2EDuration="2m6.701798468s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:57.647223224 +0000 UTC m=+147.687280979" watchObservedRunningTime="2026-02-04 08:43:57.701798468 +0000 UTC m=+147.741856223" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.703242 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql"] 
Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.754866 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-tbxr8" podStartSLOduration=126.754848491 podStartE2EDuration="2m6.754848491s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:57.745107388 +0000 UTC m=+147.785165143" watchObservedRunningTime="2026-02-04 08:43:57.754848491 +0000 UTC m=+147.794906246" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.755395 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fwfsl"] Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.767949 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:57 crc kubenswrapper[4644]: E0204 08:43:57.768315 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:58.268283019 +0000 UTC m=+148.308340774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.770008 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:57 crc kubenswrapper[4644]: E0204 08:43:57.770316 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:58.270308011 +0000 UTC m=+148.310365766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.827880 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf"] Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.870583 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:57 crc kubenswrapper[4644]: E0204 08:43:57.871041 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:58.371021574 +0000 UTC m=+148.411079329 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.877660 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz"] Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.887198 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.899796 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.907286 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7"] Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.907894 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 08:43:57 crc kubenswrapper[4644]: I0204 08:43:57.972114 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:57 crc kubenswrapper[4644]: E0204 08:43:57.972421 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:58.47240881 +0000 UTC m=+148.512466565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.022808 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6qfw4"] Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.075954 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:58 crc kubenswrapper[4644]: E0204 08:43:58.076250 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:58.576234017 +0000 UTC m=+148.616291772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.121673 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ffncs"] Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.123023 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gbncm"] Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.138029 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9"] Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.178686 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:58 crc kubenswrapper[4644]: E0204 08:43:58.178980 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:58.678969012 +0000 UTC m=+148.719026767 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.235021 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bgkbq"] Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.235866 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz"] Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.278772 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.279206 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:58 crc kubenswrapper[4644]: E0204 08:43:58.279444 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:58.779427669 +0000 UTC m=+148.819485424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:58 crc kubenswrapper[4644]: W0204 08:43:58.293059 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aa7cc3c_1068_4427_a0c2_24e952c5ed2c.slice/crio-aa7d4ec95ebb3ddaac43ab84ca1ac75daf9f4faef66c921f30cb1065716ed63e WatchSource:0}: Error finding container aa7d4ec95ebb3ddaac43ab84ca1ac75daf9f4faef66c921f30cb1065716ed63e: Status 404 returned error can't find the container with id aa7d4ec95ebb3ddaac43ab84ca1ac75daf9f4faef66c921f30cb1065716ed63e Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.296421 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h6l9p"] Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.354115 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw"] Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.376669 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.378208 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.378245 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.380362 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:58 crc kubenswrapper[4644]: E0204 08:43:58.390894 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:58.890846663 +0000 UTC m=+148.930904418 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.481882 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:58 crc kubenswrapper[4644]: E0204 08:43:58.482670 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:58.98264808 +0000 UTC m=+149.022705835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.558273 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd"] Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.583314 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:58 crc kubenswrapper[4644]: E0204 08:43:58.583656 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:59.083644068 +0000 UTC m=+149.123701813 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.626565 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-04 08:38:57 +0000 UTC, rotation deadline is 2026-11-17 03:02:33.623022359 +0000 UTC Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.626605 4644 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6858h18m34.996419848s for next certificate rotation Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.684118 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:58 crc kubenswrapper[4644]: E0204 08:43:58.684671 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:59.184654506 +0000 UTC m=+149.224712261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.783617 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9n8df" event={"ID":"2e271786-29bb-4576-b75a-23568a8b8ae0","Type":"ContainerStarted","Data":"3679d2672c49ea391a616414d2cae49a60ace9351c1ee2c10f0d0c7b54a13b13"} Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.789878 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:58 crc kubenswrapper[4644]: E0204 08:43:58.790377 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:59.290360283 +0000 UTC m=+149.330418038 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.840421 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz" event={"ID":"1b4f7d75-0785-47c5-b13c-921a4c98781a","Type":"ContainerStarted","Data":"30473a11282846c7443c4173619ece8779246f47c3da37435821fa51409b0e6f"} Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.876215 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" event={"ID":"0a6c3ee4-ea6a-47ac-af16-983d3265fdad","Type":"ContainerStarted","Data":"f707aacd8436eb7c235f80966c965e8e93b64fd3ee8fb76164602811a28d96cd"} Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.890895 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v" event={"ID":"30482ca0-e995-45ad-9f4b-8bb30c6a044f","Type":"ContainerStarted","Data":"7ffe0f7f93122b3558ffd6515a959ad96b538fdb86a92f6b71626b1ea47bad1b"} Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.890938 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v" event={"ID":"30482ca0-e995-45ad-9f4b-8bb30c6a044f","Type":"ContainerStarted","Data":"d0c343b2c746066bf2d98c1b60044c1bd5819aba916b9d687f00b45d0d269836"} Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.891704 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:58 crc kubenswrapper[4644]: E0204 08:43:58.892138 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:59.392121936 +0000 UTC m=+149.432179691 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.926829 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sv5v" podStartSLOduration=127.926806237 podStartE2EDuration="2m7.926806237s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:58.925143862 +0000 UTC m=+148.965201617" watchObservedRunningTime="2026-02-04 08:43:58.926806237 +0000 UTC m=+148.966863992" Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.991363 4644 generic.go:334] "Generic (PLEG): container finished" podID="1380462d-7e7c-4c20-859c-4132b703369e" containerID="6d80e66aef07ddaded220012f30f156e9a25ca99bcd6ac00cb29ff4e84ad2eaa" exitCode=0 Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.991667 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" event={"ID":"1380462d-7e7c-4c20-859c-4132b703369e","Type":"ContainerDied","Data":"6d80e66aef07ddaded220012f30f156e9a25ca99bcd6ac00cb29ff4e84ad2eaa"} Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.991690 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" event={"ID":"1380462d-7e7c-4c20-859c-4132b703369e","Type":"ContainerStarted","Data":"cf9606e7e59d28f34bb360dc085649bb5f264e1e4009ca065ca558b1bcc5221d"} Feb 04 08:43:58 crc kubenswrapper[4644]: I0204 08:43:58.992184 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:58.999692 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:59 crc kubenswrapper[4644]: E0204 08:43:59.000861 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:59.500847025 +0000 UTC m=+149.540904800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.050814 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" podStartSLOduration=128.050794593 podStartE2EDuration="2m8.050794593s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:59.027781855 +0000 UTC m=+149.067839640" watchObservedRunningTime="2026-02-04 08:43:59.050794593 +0000 UTC m=+149.090852348" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.065155 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t" event={"ID":"c39fb4f7-59e5-4d60-9fcb-bdc5744886c0","Type":"ContainerStarted","Data":"4dbee9c150bc9913d6942845f99cee2a8cf8731aefa454f5d55a7c06841e5879"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.065190 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t" event={"ID":"c39fb4f7-59e5-4d60-9fcb-bdc5744886c0","Type":"ContainerStarted","Data":"a4908c5719b0f2e2f1bee1076a775007b3c95d8a7794ad360b4a8da48d2a1c67"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.105021 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:59 crc kubenswrapper[4644]: E0204 08:43:59.108931 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:59.60890066 +0000 UTC m=+149.648958405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.151993 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htm2g" event={"ID":"f846627e-2b5c-4fed-8898-e734c9dbce9b","Type":"ContainerStarted","Data":"e8489726490dda465d724aad6b57db3c3c3c6751259de5a938dea287df01249e"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.194215 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htm2g" podStartSLOduration=128.194198832 podStartE2EDuration="2m8.194198832s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:59.192735122 +0000 UTC m=+149.232792877" watchObservedRunningTime="2026-02-04 08:43:59.194198832 +0000 UTC m=+149.234256587" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.207838 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:59 crc kubenswrapper[4644]: E0204 08:43:59.208213 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:59.708199123 +0000 UTC m=+149.748256888 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.214742 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" event={"ID":"bd6a3257-4cc9-4ec3-b951-10e040c611cd","Type":"ContainerStarted","Data":"6a418ee5909baf8d93fa8744d885934c0589fc045d02f6bbce0e24e57d5836d7"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.214786 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" event={"ID":"bd6a3257-4cc9-4ec3-b951-10e040c611cd","Type":"ContainerStarted","Data":"07543591f753bf7b3647ebbc3dd5828de02cbbed098793a5f4e1afbccdfd6962"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.244775 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fwfsl" event={"ID":"3424ac8e-3024-48a5-b7d1-235b3ca9b5e6","Type":"ContainerStarted","Data":"3d2f628394f5d292c7436fd35fb2f8bc596f78a7aa9bb70cba9a60e5537ac28d"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.244831 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fwfsl" event={"ID":"3424ac8e-3024-48a5-b7d1-235b3ca9b5e6","Type":"ContainerStarted","Data":"bb9f4fe11175e20c6f56923b6d167b43c7f18449ae17adeb6a46246298719aa7"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.281590 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-7z7rd" event={"ID":"58096f07-653d-4448-87af-35c02d2b4047","Type":"ContainerStarted","Data":"a125798fa5a86b24200504474290bbb17aeea842f17313d4ba4cb65d5956baf3"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.283676 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb" event={"ID":"ef7b53b1-edd2-4468-88e9-7f9764aed161","Type":"ContainerStarted","Data":"cdebee8a82abc79e742e29bc955ba9b6df26f925ea0fe5e120fff2fd0a2ec865"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.300307 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ssxgp" podStartSLOduration=128.300288796 podStartE2EDuration="2m8.300288796s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:59.255679039 +0000 UTC m=+149.295736794" watchObservedRunningTime="2026-02-04 08:43:59.300288796 +0000 UTC m=+149.340346551" Feb 04 08:43:59 crc kubenswrapper[4644]: W0204 08:43:59.305249 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-5828c9405107087c756af0c4a5b16af2e7ed2d6ac2743ec974ef8e485bf8a472 WatchSource:0}: Error finding container 5828c9405107087c756af0c4a5b16af2e7ed2d6ac2743ec974ef8e485bf8a472: Status 404 returned error can't find the 
container with id 5828c9405107087c756af0c4a5b16af2e7ed2d6ac2743ec974ef8e485bf8a472 Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.315289 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:59 crc kubenswrapper[4644]: E0204 08:43:59.315562 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:59.815522323 +0000 UTC m=+149.855580078 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.315814 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:59 crc kubenswrapper[4644]: E0204 08:43:59.316818 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:43:59.816804609 +0000 UTC m=+149.856862364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.343060 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" event={"ID":"751e1c50-72f6-4e71-a24c-58072b84ee39","Type":"ContainerStarted","Data":"05cdcc0444872a7f263f11bc3b0bce97a23ddfde3d20e2fd6b957317b9d865a6"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.343110 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" event={"ID":"751e1c50-72f6-4e71-a24c-58072b84ee39","Type":"ContainerStarted","Data":"112e6ed21d7983625513016eab2413ff1ae57cb10418926ed0222d0c781d0caf"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.345102 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-7z7rd" podStartSLOduration=8.345082877 podStartE2EDuration="8.345082877s" podCreationTimestamp="2026-02-04 08:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:59.345079237 +0000 UTC m=+149.385136992" watchObservedRunningTime="2026-02-04 08:43:59.345082877 +0000 UTC m=+149.385140632" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.345186 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-fwfsl" podStartSLOduration=7.3451825490000004 podStartE2EDuration="7.345182549s" podCreationTimestamp="2026-02-04 08:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:59.299594592 +0000 UTC m=+149.339652347" watchObservedRunningTime="2026-02-04 08:43:59.345182549 +0000 UTC m=+149.385240304" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.347736 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7" event={"ID":"eb091d64-a21d-4e6b-82f3-96328d665c91","Type":"ContainerStarted","Data":"0bd0921c43ca64aa8bdb7badd9818260a177d36944f31e18f76994366f675246"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.348692 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-spmn6" event={"ID":"b015e246-f316-443a-adc9-56b69f929ae8","Type":"ContainerStarted","Data":"23fad810f99e8940b473b216ad8b1b4a4bfe22acb0747b435527be68bca541f2"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.348710 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-spmn6" event={"ID":"b015e246-f316-443a-adc9-56b69f929ae8","Type":"ContainerStarted","Data":"0ffb6201e9d6e163ff197498771cd689a8f6d6d70b209cccedfcbd0a65784f5e"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.349843 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859" 
event={"ID":"b5739acd-dba1-466b-9397-fba070e97c71","Type":"ContainerStarted","Data":"0920bf5965a563073188075c0b7b542be06897295c5e1b9263b2390378c44ce8"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.363198 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9" event={"ID":"fd112a00-09af-441f-b41f-29e12865d2ac","Type":"ContainerStarted","Data":"231566a9e8f2dcb70d937982d40b414c17203fca82f0c79bc95019b50a0ba16b"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.364450 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" event={"ID":"aa2f8af2-85ef-4b5d-a95a-194c6a05a501","Type":"ContainerStarted","Data":"7d0fe87615fa2d4730f3f0cd338cb38e3afeb898d17d58c902d4e88f3ec2ec70"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.366014 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" event={"ID":"48ff7085-6976-40a6-a50f-b1d6cbbd90aa","Type":"ContainerStarted","Data":"22e17c80d524ea22aa57ecdb8b08cad76bd27ed945fa75239ceba151b37c2669"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.368619 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" event={"ID":"f90f3d6a-19c8-4559-88d7-983dc38398da","Type":"ContainerStarted","Data":"3a2ecffbe6ff5c37211b7ab3abc2e1d7bdcd4841b1e9bc5c13d5207341e755f0"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.369919 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.382069 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z6grb" podStartSLOduration=128.382057115 podStartE2EDuration="2m8.382057115s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:59.368051684 +0000 UTC m=+149.408109439" watchObservedRunningTime="2026-02-04 08:43:59.382057115 +0000 UTC m=+149.422114870" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.383409 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6qfw4" event={"ID":"48d56a70-84c8-41cb-ba71-e74768d42190","Type":"ContainerStarted","Data":"38b33ed969a472c179e473b97ec7d17b0d08a79bc7051f3c949f51a85f54f827"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.383502 4644 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-65hsz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.383526 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" podUID="f90f3d6a-19c8-4559-88d7-983dc38398da" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.401995 4644 patch_prober.go:28] interesting 
pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:43:59 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:43:59 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:43:59 crc kubenswrapper[4644]: healthz check failed Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.402051 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.402754 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-47859" podStartSLOduration=128.402737205 podStartE2EDuration="2m8.402737205s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:59.383273 +0000 UTC m=+149.423330755" watchObservedRunningTime="2026-02-04 08:43:59.402737205 +0000 UTC m=+149.442794960" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.407978 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7f8vh" podStartSLOduration=128.407963984 podStartE2EDuration="2m8.407963984s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:59.402575342 +0000 UTC m=+149.442633097" watchObservedRunningTime="2026-02-04 08:43:59.407963984 +0000 UTC m=+149.448021739" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.423937 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.425055 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" podStartSLOduration=128.425037678 podStartE2EDuration="2m8.425037678s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:59.423310242 +0000 UTC m=+149.463367997" watchObservedRunningTime="2026-02-04 08:43:59.425037678 +0000 UTC m=+149.465095423" Feb 04 08:43:59 crc kubenswrapper[4644]: E0204 08:43:59.425256 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:43:59.925239152 +0000 UTC m=+149.965296907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.426021 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw" event={"ID":"dabefcba-95c1-4c47-b1aa-3265fe1fe046","Type":"ContainerStarted","Data":"399c71e4ddf177e4839135984f86ee87d8d1945b006c0b26eef63be0496294c1"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.432038 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" event={"ID":"71118688-df29-464b-a113-93f582f8ac6f","Type":"ContainerStarted","Data":"5b01849d8f37470140a9c5c07eb01143912fb1796ee09578c6bd73f7a8cefe15"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.433462 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zjdj8" event={"ID":"7938153c-4023-411c-883b-e0c61b8a955b","Type":"ContainerStarted","Data":"dd332cbcb523d1c51af0adc90f18b81f5d2b7d178a3d96c9c0bbc896d1996294"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.434381 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" event={"ID":"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c","Type":"ContainerStarted","Data":"aa7d4ec95ebb3ddaac43ab84ca1ac75daf9f4faef66c921f30cb1065716ed63e"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.435234 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gbncm" event={"ID":"1ddce301-8043-4223-ad17-a83c4bae252e","Type":"ContainerStarted","Data":"de470e0aba5f40b7962b33a7d3e1109dd67e34a39ba17f560d5b3a0bf8206b1e"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.437750 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql" event={"ID":"62ff01da-a638-4ea1-8d32-1ff8c230ecdc","Type":"ContainerStarted","Data":"81c3fdd3370d6a8bb2409ff58534a9a6abc5c69260cfa3f4c3583d107f645173"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.437775 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql" event={"ID":"62ff01da-a638-4ea1-8d32-1ff8c230ecdc","Type":"ContainerStarted","Data":"5f8921e6a2a13ea7b5f6f9650538c359398f391fb9e29d509dd917427dd54eb0"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.439802 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.445318 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" podStartSLOduration=128.445304959 podStartE2EDuration="2m8.445304959s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:59.440085371 +0000 UTC m=+149.480143126" 
watchObservedRunningTime="2026-02-04 08:43:59.445304959 +0000 UTC m=+149.485362714" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.446976 4644 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-p8qql container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.447010 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql" podUID="62ff01da-a638-4ea1-8d32-1ff8c230ecdc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.449375 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" event={"ID":"e0ab8544-eee1-4ece-aecc-09ae3a228c3c","Type":"ContainerStarted","Data":"32c45be86e234a8a9a0613eec704d79afac062f6e4e8e60412f77a47d0be19ae"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.451579 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffncs" event={"ID":"814f95d8-7d8e-4e74-93c1-9f372d7fdc6f","Type":"ContainerStarted","Data":"56cb695131bd3cfd413af38952c8d7c9c7796ba09f6cda1138035474a8d0a670"} Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.601893 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" podStartSLOduration=128.601875642 podStartE2EDuration="2m8.601875642s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:59.596911088 +0000 UTC m=+149.636968843" watchObservedRunningTime="2026-02-04 08:43:59.601875642 +0000 UTC m=+149.641933397" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.620470 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:59 crc kubenswrapper[4644]: E0204 08:43:59.631542 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:00.131520037 +0000 UTC m=+150.171577792 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.643318 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql" podStartSLOduration=128.643294013 podStartE2EDuration="2m8.643294013s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:43:59.629636119 +0000 UTC m=+149.669693894" watchObservedRunningTime="2026-02-04 08:43:59.643294013 +0000 UTC m=+149.683351768" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.725784 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:59 crc kubenswrapper[4644]: E0204 08:43:59.726203 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:00.226186574 +0000 UTC m=+150.266244329 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.831810 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:43:59 crc kubenswrapper[4644]: E0204 08:43:59.832318 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:00.332300668 +0000 UTC m=+150.372358423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.856672 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-tbxr8" Feb 04 08:43:59 crc kubenswrapper[4644]: I0204 08:43:59.933902 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:43:59 crc kubenswrapper[4644]: E0204 08:43:59.934214 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:00.434197445 +0000 UTC m=+150.474255200 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.036161 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:00 crc kubenswrapper[4644]: E0204 08:44:00.036631 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:00.536616823 +0000 UTC m=+150.576674578 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.136744 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:00 crc kubenswrapper[4644]: E0204 08:44:00.137006 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:00.636977328 +0000 UTC m=+150.677035083 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.137601 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:00 crc kubenswrapper[4644]: E0204 08:44:00.137936 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:00.637923937 +0000 UTC m=+150.677981692 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.238228 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:00 crc kubenswrapper[4644]: E0204 08:44:00.238556 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:00.738515077 +0000 UTC m=+150.778572832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.238647 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:00 crc kubenswrapper[4644]: E0204 08:44:00.239378 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:00.739369525 +0000 UTC m=+150.779427280 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.339609 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:00 crc kubenswrapper[4644]: E0204 08:44:00.339902 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:00.839882753 +0000 UTC m=+150.879940508 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.382729 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:00 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:00 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:00 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.383187 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.441989 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:00 crc kubenswrapper[4644]: E0204 08:44:00.442405 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:00.942385393 +0000 UTC m=+150.982443148 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.484156 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6qfw4" event={"ID":"48d56a70-84c8-41cb-ba71-e74768d42190","Type":"ContainerStarted","Data":"ccc0ae1d82c6855dad954e12ecbc8722809ed1a4824dc5bb74b80325a3dced00"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.488466 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9" event={"ID":"fd112a00-09af-441f-b41f-29e12865d2ac","Type":"ContainerStarted","Data":"f81ebf4c2aed5d986e412f67ec589ced8151b3a94cd9eb4a7091c546418bc623"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.494185 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" event={"ID":"48ff7085-6976-40a6-a50f-b1d6cbbd90aa","Type":"ContainerStarted","Data":"4e2ab79b5975f49ecd670d2b85beaea0ed3e4a3843ad4b251cb547a06fcfdbec"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.494952 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.511967 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9n8df" event={"ID":"2e271786-29bb-4576-b75a-23568a8b8ae0","Type":"ContainerStarted","Data":"bf720037f645efc34d6734c072757bfc82636baa6c63d63edda390d9cab2329f"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.512839 4644 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-nbzsd container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.512926 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" podUID="48ff7085-6976-40a6-a50f-b1d6cbbd90aa" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.525555 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8e8c90bf2087e6eb0571206714875bf9665ad78f59673a6642c141dabbc03879"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.525610 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5828c9405107087c756af0c4a5b16af2e7ed2d6ac2743ec974ef8e485bf8a472"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.542533 4644 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" event={"ID":"aa2f8af2-85ef-4b5d-a95a-194c6a05a501","Type":"ContainerStarted","Data":"2f331a753711e61947e987b07ed7ded0d7923fca166fea98bcfcd90f713f104b"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.543608 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:00 crc kubenswrapper[4644]: E0204 08:44:00.545551 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:01.045530156 +0000 UTC m=+151.085587911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.563623 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" event={"ID":"f90f3d6a-19c8-4559-88d7-983dc38398da","Type":"ContainerStarted","Data":"66bebd5ba12f837e49b2dbe7445a7fd31bbed203bfa297260c977324da209e4f"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.565065 4644 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-65hsz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.565153 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" podUID="f90f3d6a-19c8-4559-88d7-983dc38398da" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.578888 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d867f3f1c50befbfa9f288920c2809174177f0117fa2540fb283ffff28ecd2bd"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.578981 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f3f30288dc3e1e6ec289eab9e1d5c6a61d80b56887173533915e39ffef367e26"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.579255 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.589691 4644 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tx5v9" podStartSLOduration=129.589670032 podStartE2EDuration="2m9.589670032s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:00.589259434 +0000 UTC m=+150.629317189" watchObservedRunningTime="2026-02-04 08:44:00.589670032 +0000 UTC m=+150.629727787" Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.618599 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz" event={"ID":"1b4f7d75-0785-47c5-b13c-921a4c98781a","Type":"ContainerStarted","Data":"bf240da677976f88639d35a6bda7d892aa7c54680531334384fd797903bd6564"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.638006 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw" event={"ID":"dabefcba-95c1-4c47-b1aa-3265fe1fe046","Type":"ContainerStarted","Data":"8a73da61d529661aef50e4d8eb4a73efc90603cbec1a9810fdf6b4e7e6513331"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.645355 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:00 crc kubenswrapper[4644]: E0204 08:44:00.646533 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:01.146513204 +0000 UTC m=+151.186571039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.673907 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t" event={"ID":"c39fb4f7-59e5-4d60-9fcb-bdc5744886c0","Type":"ContainerStarted","Data":"ba3114f0f651b7f0ece26b292509f10e4b210c297026696e64b8c317495c2517"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.677628 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" event={"ID":"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c","Type":"ContainerStarted","Data":"23e31ac2682616e57d2c0ca3938d057f40052fea7bad6a7f6be7dee72651a923"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.678003 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.691192 4644 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bgkbq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.691252 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.699260 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffncs" event={"ID":"814f95d8-7d8e-4e74-93c1-9f372d7fdc6f","Type":"ContainerStarted","Data":"730157d3e7714f973b974a2faea1af7aecc85fe0cec69ba4da1e5b86cbc27199"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.699319 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffncs" event={"ID":"814f95d8-7d8e-4e74-93c1-9f372d7fdc6f","Type":"ContainerStarted","Data":"faf3c43542b55522798d70402713c2a3ab580a90c124100f23e4090dece337d2"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.717132 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ac7cb7547c67a5efb7529f74f6db543a6df5c4e19ac35ebd17dbd2fe3c14400e"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.717195 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1840f2c612e1a77f1d2182c4d654ee2347300e6cc122cd5dc3da2cd608e24b2a"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.723758 4644 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gbncm" event={"ID":"1ddce301-8043-4223-ad17-a83c4bae252e","Type":"ContainerStarted","Data":"d544f1ba656b4a47674d8b604cd6e3ec4c10df84c05715cfdad2bd2763f8ef8d"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.746381 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:00 crc kubenswrapper[4644]: E0204 08:44:00.747654 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:01.247634804 +0000 UTC m=+151.287692559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.764675 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7" event={"ID":"eb091d64-a21d-4e6b-82f3-96328d665c91","Type":"ContainerStarted","Data":"49860ef467bf20e603feeca4edee369486ce792276fd77dbd782fcf2347876fe"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.765367 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7" event={"ID":"eb091d64-a21d-4e6b-82f3-96328d665c91","Type":"ContainerStarted","Data":"93f0263c8e49ce059b18eda1efd006cfa536abbad02ad84dad8b5876d8bb86cd"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.766150 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7" Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.779552 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-9n8df" podStartSLOduration=129.779536587 podStartE2EDuration="2m9.779536587s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:00.778935064 +0000 UTC m=+150.818992809" watchObservedRunningTime="2026-02-04 08:44:00.779536587 +0000 UTC m=+150.819594342" Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.784423 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-spmn6" event={"ID":"b015e246-f316-443a-adc9-56b69f929ae8","Type":"ContainerStarted","Data":"d9713cf3f188385e52538fa9d34e36e235cf370c97403e6bfa54418d63675081"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.814852 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" 
event={"ID":"e0ab8544-eee1-4ece-aecc-09ae3a228c3c","Type":"ContainerStarted","Data":"87aa1ec8e090debb40133f094179b1ea175eebc84f233c699f914c3c20dd4dac"} Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.849700 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:00 crc kubenswrapper[4644]: E0204 08:44:00.859668 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:01.359647961 +0000 UTC m=+151.399705716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.885212 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p8qql" Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.892082 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" podStartSLOduration=129.892060414 podStartE2EDuration="2m9.892060414s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:00.89137266 +0000 UTC m=+150.931430415" watchObservedRunningTime="2026-02-04 08:44:00.892060414 +0000 UTC m=+150.932118169" Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.951312 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:00 crc kubenswrapper[4644]: E0204 08:44:00.953799 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:01.453754516 +0000 UTC m=+151.493812271 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:00 crc kubenswrapper[4644]: I0204 08:44:00.971066 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m7c4t" podStartSLOduration=129.971046616 podStartE2EDuration="2m9.971046616s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:00.923175162 +0000 UTC m=+150.963232907" watchObservedRunningTime="2026-02-04 08:44:00.971046616 +0000 UTC m=+151.011104371" Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.034737 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d7hxw" podStartSLOduration=130.034712178 podStartE2EDuration="2m10.034712178s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:01.026275793 +0000 UTC m=+151.066333548" watchObservedRunningTime="2026-02-04 08:44:01.034712178 +0000 UTC m=+151.074769933" Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.056202 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:01 crc kubenswrapper[4644]: E0204 08:44:01.056612 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:01.556598153 +0000 UTC m=+151.596655908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.082257 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-gbncm" podStartSLOduration=130.082238525 podStartE2EDuration="2m10.082238525s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:01.082112953 +0000 UTC m=+151.122170708" watchObservedRunningTime="2026-02-04 08:44:01.082238525 +0000 UTC m=+151.122296280" Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.082682 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-spmn6" podStartSLOduration=130.082678414 podStartE2EDuration="2m10.082678414s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:01.050705591 +0000 UTC m=+151.090763346" watchObservedRunningTime="2026-02-04 08:44:01.082678414 +0000 UTC m=+151.122736169" Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.157170 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:01 crc kubenswrapper[4644]: E0204 08:44:01.157786 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:01.657767974 +0000 UTC m=+151.697825729 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.164895 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" podStartSLOduration=130.164857082 podStartE2EDuration="2m10.164857082s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:01.163674137 +0000 UTC m=+151.203731892" watchObservedRunningTime="2026-02-04 08:44:01.164857082 +0000 UTC m=+151.204914837" Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.167146 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7" podStartSLOduration=130.16713534 podStartE2EDuration="2m10.16713534s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:01.122950601 +0000 UTC m=+151.163008346" watchObservedRunningTime="2026-02-04 08:44:01.16713534 +0000 UTC m=+151.207193105" Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.232102 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mksqz" podStartSLOduration=130.232079499 podStartE2EDuration="2m10.232079499s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:01.201752588 +0000 UTC m=+151.241810343" watchObservedRunningTime="2026-02-04 08:44:01.232079499 +0000 UTC m=+151.272137264" Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.233010 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ffncs" podStartSLOduration=130.233004557 podStartE2EDuration="2m10.233004557s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:01.231070047 +0000 UTC m=+151.271127802" watchObservedRunningTime="2026-02-04 08:44:01.233004557 +0000 UTC m=+151.273062312" Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.257717 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" podStartSLOduration=130.257699831 podStartE2EDuration="2m10.257699831s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:01.255679628 +0000 UTC m=+151.295737383" watchObservedRunningTime="2026-02-04 08:44:01.257699831 +0000 UTC m=+151.297757586" Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.259554 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:01 crc kubenswrapper[4644]: E0204 08:44:01.260087 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:01.76007103 +0000 UTC m=+151.800128785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.361026 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:01 crc kubenswrapper[4644]: E0204 08:44:01.361300 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:01.861267752 +0000 UTC m=+151.901325507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.361565 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:01 crc kubenswrapper[4644]: E0204 08:44:01.361938 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:01.861926196 +0000 UTC m=+151.901983951 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.380736 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:01 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:01 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:01 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.381107 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.463108 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:01 crc kubenswrapper[4644]: E0204 08:44:01.463316 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:01.963278491 +0000 UTC m=+152.003336246 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.463387 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:01 crc kubenswrapper[4644]: E0204 08:44:01.463688 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:01.963675339 +0000 UTC m=+152.003733094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.564559 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:01 crc kubenswrapper[4644]: E0204 08:44:01.564982 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:02.064965374 +0000 UTC m=+152.105023129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.666564 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:01 crc kubenswrapper[4644]: E0204 08:44:01.667081 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:02.167069475 +0000 UTC m=+152.207127230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.768179 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:01 crc kubenswrapper[4644]: E0204 08:44:01.768414 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:02.268384179 +0000 UTC m=+152.308441934 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.768819 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:01 crc kubenswrapper[4644]: E0204 08:44:01.769169 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:02.269160685 +0000 UTC m=+152.309218450 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.820396 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" event={"ID":"0a6c3ee4-ea6a-47ac-af16-983d3265fdad","Type":"ContainerStarted","Data":"f6cf01d9240e1440db6be55fbb759875085d92437093502d546e330968d2d539"} Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.823366 4644 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bgkbq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.823407 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.824256 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6qfw4" event={"ID":"48d56a70-84c8-41cb-ba71-e74768d42190","Type":"ContainerStarted","Data":"64d8258518442734e59742a8668c6c3f3405e27cad5480be19ebffa816bf9606"} Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.834139 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nbzsd" Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.869839 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:01 crc kubenswrapper[4644]: E0204 08:44:01.871179 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:02.371147934 +0000 UTC m=+152.411205689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.871810 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-6qfw4" podStartSLOduration=10.871786737 podStartE2EDuration="10.871786737s" podCreationTimestamp="2026-02-04 08:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:01.854004508 +0000 UTC m=+151.894062333" watchObservedRunningTime="2026-02-04 08:44:01.871786737 +0000 UTC m=+151.911844492" Feb 04 08:44:01 crc kubenswrapper[4644]: I0204 08:44:01.975179 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:01 crc kubenswrapper[4644]: E0204 08:44:01.976264 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:02.476245137 +0000 UTC m=+152.516302892 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.076804 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:02 crc kubenswrapper[4644]: E0204 08:44:02.077963 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:02.57793709 +0000 UTC m=+152.617994845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.178577 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:02 crc kubenswrapper[4644]: E0204 08:44:02.179003 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:02.678984539 +0000 UTC m=+152.719042294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.279616 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:02 crc kubenswrapper[4644]: E0204 08:44:02.280049 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:02.780029049 +0000 UTC m=+152.820086804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.380815 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:02 crc kubenswrapper[4644]: E0204 08:44:02.381173 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:02.881155229 +0000 UTC m=+152.921212984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.383353 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:02 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:02 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:02 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.383397 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.482124 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:02 crc kubenswrapper[4644]: E0204 08:44:02.482355 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:02.98230572 +0000 UTC m=+153.022363475 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.482702 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:02 crc kubenswrapper[4644]: E0204 08:44:02.483047 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:02.983038525 +0000 UTC m=+153.023096280 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.583592 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:02 crc kubenswrapper[4644]: E0204 08:44:02.583761 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:03.083730227 +0000 UTC m=+153.123787982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.584119 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:02 crc kubenswrapper[4644]: E0204 08:44:02.584501 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:03.084493294 +0000 UTC m=+153.124551049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.685250 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:02 crc kubenswrapper[4644]: E0204 08:44:02.685955 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:03.185939781 +0000 UTC m=+153.225997536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.737730 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g7zgx"] Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.738861 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g7zgx" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.742626 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.746683 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g7zgx"] Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.787534 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-utilities\") pod \"certified-operators-g7zgx\" (UID: \"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd\") " pod="openshift-marketplace/certified-operators-g7zgx" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.787711 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.787734 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-catalog-content\") pod \"certified-operators-g7zgx\" (UID: \"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd\") " pod="openshift-marketplace/certified-operators-g7zgx" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.787753 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9sn2\" (UniqueName: \"kubernetes.io/projected/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-kube-api-access-f9sn2\") pod \"certified-operators-g7zgx\" (UID: \"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd\") " pod="openshift-marketplace/certified-operators-g7zgx" Feb 04 08:44:02 crc kubenswrapper[4644]: E0204 08:44:02.788068 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:03.288056352 +0000 UTC m=+153.328114097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.823260 4644 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-65hsz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.823320 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" podUID="f90f3d6a-19c8-4559-88d7-983dc38398da" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.833725 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" event={"ID":"0a6c3ee4-ea6a-47ac-af16-983d3265fdad","Type":"ContainerStarted","Data":"66f305a5f8dc8b15e478904b96c2f9208efb78acc9e498aad6b06c7af80b46d3"} Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.834223 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-6qfw4" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.888389 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:02 crc kubenswrapper[4644]: E0204 08:44:02.888527 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:03.388507949 +0000 UTC m=+153.428565704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.888679 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:02 crc kubenswrapper[4644]: E0204 08:44:02.888897 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:03.388890227 +0000 UTC m=+153.428947982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.889639 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-catalog-content\") pod \"certified-operators-g7zgx\" (UID: \"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd\") " pod="openshift-marketplace/certified-operators-g7zgx" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.889763 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9sn2\" (UniqueName: \"kubernetes.io/projected/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-kube-api-access-f9sn2\") pod \"certified-operators-g7zgx\" (UID: \"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd\") " pod="openshift-marketplace/certified-operators-g7zgx" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.889796 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-utilities\") pod \"certified-operators-g7zgx\" (UID: \"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd\") " pod="openshift-marketplace/certified-operators-g7zgx" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.890123 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-catalog-content\") pod \"certified-operators-g7zgx\" (UID: \"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd\") " pod="openshift-marketplace/certified-operators-g7zgx" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.890401 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-utilities\") pod 
\"certified-operators-g7zgx\" (UID: \"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd\") " pod="openshift-marketplace/certified-operators-g7zgx" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.925375 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d8qh5"] Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.926294 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8qh5" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.931170 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9sn2\" (UniqueName: \"kubernetes.io/projected/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-kube-api-access-f9sn2\") pod \"certified-operators-g7zgx\" (UID: \"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd\") " pod="openshift-marketplace/certified-operators-g7zgx" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.932454 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.966179 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d8qh5"] Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.993069 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.993315 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnn6h\" (UniqueName: \"kubernetes.io/projected/b95f491a-610d-44ed-ae19-9e5b7ac25f52-kube-api-access-vnn6h\") pod \"community-operators-d8qh5\" (UID: \"b95f491a-610d-44ed-ae19-9e5b7ac25f52\") " pod="openshift-marketplace/community-operators-d8qh5" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.993381 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95f491a-610d-44ed-ae19-9e5b7ac25f52-catalog-content\") pod \"community-operators-d8qh5\" (UID: \"b95f491a-610d-44ed-ae19-9e5b7ac25f52\") " pod="openshift-marketplace/community-operators-d8qh5" Feb 04 08:44:02 crc kubenswrapper[4644]: I0204 08:44:02.993402 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95f491a-610d-44ed-ae19-9e5b7ac25f52-utilities\") pod \"community-operators-d8qh5\" (UID: \"b95f491a-610d-44ed-ae19-9e5b7ac25f52\") " pod="openshift-marketplace/community-operators-d8qh5" Feb 04 08:44:02 crc kubenswrapper[4644]: E0204 08:44:02.993499 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:03.49348443 +0000 UTC m=+153.533542185 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.052384 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g7zgx" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.094471 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.094568 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnn6h\" (UniqueName: \"kubernetes.io/projected/b95f491a-610d-44ed-ae19-9e5b7ac25f52-kube-api-access-vnn6h\") pod \"community-operators-d8qh5\" (UID: \"b95f491a-610d-44ed-ae19-9e5b7ac25f52\") " pod="openshift-marketplace/community-operators-d8qh5" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.094653 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95f491a-610d-44ed-ae19-9e5b7ac25f52-catalog-content\") pod \"community-operators-d8qh5\" (UID: \"b95f491a-610d-44ed-ae19-9e5b7ac25f52\") " pod="openshift-marketplace/community-operators-d8qh5" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.094682 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95f491a-610d-44ed-ae19-9e5b7ac25f52-utilities\") pod \"community-operators-d8qh5\" (UID: \"b95f491a-610d-44ed-ae19-9e5b7ac25f52\") " pod="openshift-marketplace/community-operators-d8qh5" Feb 04 08:44:03 crc kubenswrapper[4644]: E0204 08:44:03.094882 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:03.594862536 +0000 UTC m=+153.634920291 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.095406 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95f491a-610d-44ed-ae19-9e5b7ac25f52-catalog-content\") pod \"community-operators-d8qh5\" (UID: \"b95f491a-610d-44ed-ae19-9e5b7ac25f52\") " pod="openshift-marketplace/community-operators-d8qh5" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.095549 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95f491a-610d-44ed-ae19-9e5b7ac25f52-utilities\") pod \"community-operators-d8qh5\" (UID: \"b95f491a-610d-44ed-ae19-9e5b7ac25f52\") " pod="openshift-marketplace/community-operators-d8qh5" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.150952 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s99cb"] Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.152142 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s99cb" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.161375 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnn6h\" (UniqueName: \"kubernetes.io/projected/b95f491a-610d-44ed-ae19-9e5b7ac25f52-kube-api-access-vnn6h\") pod \"community-operators-d8qh5\" (UID: \"b95f491a-610d-44ed-ae19-9e5b7ac25f52\") " pod="openshift-marketplace/community-operators-d8qh5" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.195790 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.196013 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blbj2\" (UniqueName: \"kubernetes.io/projected/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-kube-api-access-blbj2\") pod \"certified-operators-s99cb\" (UID: \"e6faaab9-25f9-47fc-a713-f4b6bde29cbc\") " pod="openshift-marketplace/certified-operators-s99cb" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.196042 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-utilities\") pod \"certified-operators-s99cb\" (UID: \"e6faaab9-25f9-47fc-a713-f4b6bde29cbc\") " pod="openshift-marketplace/certified-operators-s99cb" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.196062 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-catalog-content\") pod \"certified-operators-s99cb\" (UID: 
\"e6faaab9-25f9-47fc-a713-f4b6bde29cbc\") " pod="openshift-marketplace/certified-operators-s99cb" Feb 04 08:44:03 crc kubenswrapper[4644]: E0204 08:44:03.196171 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:03.69615501 +0000 UTC m=+153.736212765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.239668 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s99cb"] Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.270003 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8qh5" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.298281 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blbj2\" (UniqueName: \"kubernetes.io/projected/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-kube-api-access-blbj2\") pod \"certified-operators-s99cb\" (UID: \"e6faaab9-25f9-47fc-a713-f4b6bde29cbc\") " pod="openshift-marketplace/certified-operators-s99cb" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.298337 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-utilities\") pod \"certified-operators-s99cb\" (UID: \"e6faaab9-25f9-47fc-a713-f4b6bde29cbc\") " pod="openshift-marketplace/certified-operators-s99cb" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.298357 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-catalog-content\") pod \"certified-operators-s99cb\" (UID: \"e6faaab9-25f9-47fc-a713-f4b6bde29cbc\") " pod="openshift-marketplace/certified-operators-s99cb" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.298389 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:03 crc kubenswrapper[4644]: E0204 08:44:03.298809 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:03.798796973 +0000 UTC m=+153.838854728 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.299457 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-utilities\") pod \"certified-operators-s99cb\" (UID: \"e6faaab9-25f9-47fc-a713-f4b6bde29cbc\") " pod="openshift-marketplace/certified-operators-s99cb" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.299661 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-catalog-content\") pod \"certified-operators-s99cb\" (UID: \"e6faaab9-25f9-47fc-a713-f4b6bde29cbc\") " pod="openshift-marketplace/certified-operators-s99cb" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.401158 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:03 crc kubenswrapper[4644]: E0204 08:44:03.401974 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:03.901956245 +0000 UTC m=+153.942014000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.405593 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2qxgg"] Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.411026 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2qxgg" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.409270 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:03 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:03 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:03 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.422938 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.408486 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blbj2\" (UniqueName: \"kubernetes.io/projected/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-kube-api-access-blbj2\") pod \"certified-operators-s99cb\" (UID: \"e6faaab9-25f9-47fc-a713-f4b6bde29cbc\") " pod="openshift-marketplace/certified-operators-s99cb" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.453627 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.474255 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s99cb" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.489703 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2qxgg"] Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.502558 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.502609 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-utilities\") pod \"community-operators-2qxgg\" (UID: \"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca\") " pod="openshift-marketplace/community-operators-2qxgg" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.502650 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6756\" (UniqueName: \"kubernetes.io/projected/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-kube-api-access-g6756\") pod \"community-operators-2qxgg\" (UID: \"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca\") " pod="openshift-marketplace/community-operators-2qxgg" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.502722 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-catalog-content\") pod \"community-operators-2qxgg\" (UID: \"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca\") " 
pod="openshift-marketplace/community-operators-2qxgg" Feb 04 08:44:03 crc kubenswrapper[4644]: E0204 08:44:03.502998 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:04.002984874 +0000 UTC m=+154.043042619 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.603445 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.603757 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-catalog-content\") pod \"community-operators-2qxgg\" (UID: \"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca\") " pod="openshift-marketplace/community-operators-2qxgg" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.603841 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-utilities\") pod \"community-operators-2qxgg\" (UID: \"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca\") " pod="openshift-marketplace/community-operators-2qxgg" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.603883 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6756\" (UniqueName: \"kubernetes.io/projected/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-kube-api-access-g6756\") pod \"community-operators-2qxgg\" (UID: \"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca\") " pod="openshift-marketplace/community-operators-2qxgg" Feb 04 08:44:03 crc kubenswrapper[4644]: E0204 08:44:03.605547 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:04.105509754 +0000 UTC m=+154.145567509 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.607044 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-utilities\") pod \"community-operators-2qxgg\" (UID: \"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca\") " pod="openshift-marketplace/community-operators-2qxgg" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.610562 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.612470 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.623863 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-catalog-content\") pod \"community-operators-2qxgg\" (UID: \"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca\") " pod="openshift-marketplace/community-operators-2qxgg" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.637178 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.637217 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.666573 4644 patch_prober.go:28] interesting pod/console-f9d7485db-2mwnq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.666638 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2mwnq" podUID="59a2b9fd-ede9-4e85-8ad0-552716ecca00" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.675573 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6756\" (UniqueName: \"kubernetes.io/projected/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-kube-api-access-g6756\") pod \"community-operators-2qxgg\" (UID: \"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca\") " pod="openshift-marketplace/community-operators-2qxgg" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.704678 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:03 crc kubenswrapper[4644]: E0204 
08:44:03.706613 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:04.206599674 +0000 UTC m=+154.246657429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.719755 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.720459 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.726825 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.727055 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.728049 4644 patch_prober.go:28] interesting pod/downloads-7954f5f757-zcwz9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.728088 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zcwz9" podUID="323e297c-2d63-4230-8110-c7d9c9da3538" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.728590 4644 patch_prober.go:28] interesting pod/downloads-7954f5f757-zcwz9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.728614 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zcwz9" podUID="323e297c-2d63-4230-8110-c7d9c9da3538" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.750539 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2qxgg" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.782899 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.808785 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.809735 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0917f936-d7c8-4c5f-93b3-15948a332909-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0917f936-d7c8-4c5f-93b3-15948a332909\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.809843 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0917f936-d7c8-4c5f-93b3-15948a332909-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0917f936-d7c8-4c5f-93b3-15948a332909\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 08:44:03 crc kubenswrapper[4644]: E0204 08:44:03.810033 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:04.310017022 +0000 UTC m=+154.350074777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.860793 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.862091 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.890591 4644 patch_prober.go:28] interesting pod/apiserver-76f77b778f-w4cjj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 04 08:44:03 crc kubenswrapper[4644]: [+]log ok Feb 04 08:44:03 crc kubenswrapper[4644]: [+]etcd ok Feb 04 08:44:03 crc kubenswrapper[4644]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 04 08:44:03 crc kubenswrapper[4644]: [+]poststarthook/generic-apiserver-start-informers ok Feb 04 08:44:03 crc kubenswrapper[4644]: [+]poststarthook/max-in-flight-filter ok Feb 04 08:44:03 crc kubenswrapper[4644]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 04 08:44:03 crc kubenswrapper[4644]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 04 08:44:03 crc kubenswrapper[4644]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 04 08:44:03 crc kubenswrapper[4644]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 04 08:44:03 crc kubenswrapper[4644]: [+]poststarthook/project.openshift.io-projectcache ok Feb 04 08:44:03 crc kubenswrapper[4644]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 04 08:44:03 crc kubenswrapper[4644]: [+]poststarthook/openshift.io-startinformers ok Feb 04 08:44:03 crc kubenswrapper[4644]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 04 08:44:03 crc kubenswrapper[4644]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 04 08:44:03 crc kubenswrapper[4644]: livez check failed Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.890656 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" podUID="e0ab8544-eee1-4ece-aecc-09ae3a228c3c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.901567 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.902881 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" event={"ID":"0a6c3ee4-ea6a-47ac-af16-983d3265fdad","Type":"ContainerStarted","Data":"1dea5863a2cd896069c7c0874350566091c4e4859f1c5bb325eb46d5fa55bc5b"} Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.916124 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0917f936-d7c8-4c5f-93b3-15948a332909-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0917f936-d7c8-4c5f-93b3-15948a332909\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.916304 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.916361 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0917f936-d7c8-4c5f-93b3-15948a332909-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0917f936-d7c8-4c5f-93b3-15948a332909\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.917571 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0917f936-d7c8-4c5f-93b3-15948a332909-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0917f936-d7c8-4c5f-93b3-15948a332909\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 08:44:03 crc kubenswrapper[4644]: E0204 08:44:03.917879 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:04.417860653 +0000 UTC m=+154.457918408 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:03 crc kubenswrapper[4644]: I0204 08:44:03.998207 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0917f936-d7c8-4c5f-93b3-15948a332909-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0917f936-d7c8-4c5f-93b3-15948a332909\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.017423 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:04 crc kubenswrapper[4644]: E0204 08:44:04.017902 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:04.51786446 +0000 UTC m=+154.557922225 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.018293 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:04 crc kubenswrapper[4644]: E0204 08:44:04.021484 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:04.521470484 +0000 UTC m=+154.561528229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.080739 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.119785 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:04 crc kubenswrapper[4644]: E0204 08:44:04.120638 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:04.620438931 +0000 UTC m=+154.660496686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.221442 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:04 crc kubenswrapper[4644]: E0204 08:44:04.221791 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:04.721779316 +0000 UTC m=+154.761837071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.282573 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g7zgx"] Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.322901 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:04 crc kubenswrapper[4644]: E0204 08:44:04.323234 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:04.823215033 +0000 UTC m=+154.863272788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.382632 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d8qh5"] Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.386097 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:04 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:04 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:04 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.386167 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.431190 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:04 crc kubenswrapper[4644]: E0204 08:44:04.431519 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:04.931507853 +0000 UTC m=+154.971565608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.502516 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2qxgg"] Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.531912 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:04 crc kubenswrapper[4644]: E0204 08:44:04.532355 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:05.032316277 +0000 UTC m=+155.072374032 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:04 crc kubenswrapper[4644]: W0204 08:44:04.569895 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dbe6697_c0eb_45cc_9e46_23dbf1bb1bca.slice/crio-e2220f79c4ad20f9823b88656704150871946c93f61d8394e019caeb7ffa78cf WatchSource:0}: Error finding container e2220f79c4ad20f9823b88656704150871946c93f61d8394e019caeb7ffa78cf: Status 404 returned error can't find the container with id e2220f79c4ad20f9823b88656704150871946c93f61d8394e019caeb7ffa78cf Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.585929 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s99cb"] Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.635295 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:04 crc kubenswrapper[4644]: E0204 08:44:04.635665 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:05.135650594 +0000 UTC m=+155.175708349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.724463 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.736179 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:04 crc kubenswrapper[4644]: E0204 08:44:04.736766 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 08:44:05.236748374 +0000 UTC m=+155.276806129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.837690 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:04 crc kubenswrapper[4644]: E0204 08:44:04.838043 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 08:44:05.338024578 +0000 UTC m=+155.378082333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9zrhj" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.866788 4644 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.895395 4644 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-04T08:44:04.866826806Z","Handler":null,"Name":""} Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.898641 4644 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.898672 4644 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.913182 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s99cb" event={"ID":"e6faaab9-25f9-47fc-a713-f4b6bde29cbc","Type":"ContainerStarted","Data":"3fe6cdc8a27fe5b75a9358280358fb16ca3f2d605dedb9b838a82ef53efe75a8"} Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.913437 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s99cb" event={"ID":"e6faaab9-25f9-47fc-a713-f4b6bde29cbc","Type":"ContainerStarted","Data":"018f1e6dd29eb4dfc9561d16fd3d8d7243672a6440e09471961a400614a2569a"} Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.918007 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r5qf8"] Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.921154 4644 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.922589 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5qf8" Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.940024 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.945526 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.952023 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5qf8"] Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.973407 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.976754 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-65hsz" Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.989836 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" event={"ID":"0a6c3ee4-ea6a-47ac-af16-983d3265fdad","Type":"ContainerStarted","Data":"87d71fb3ca8509e7bfdc41c1fadfed60201af4911f1275f26debfdf249ab6ff2"} Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.995225 4644 generic.go:334] "Generic (PLEG): container finished" podID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" containerID="6ca1d7bd92a38f81eef5a9af25954dc266a6f587c882be92e2081daa2d0cbefd" exitCode=0 Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.995288 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7zgx" event={"ID":"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd","Type":"ContainerDied","Data":"6ca1d7bd92a38f81eef5a9af25954dc266a6f587c882be92e2081daa2d0cbefd"} Feb 04 08:44:04 crc kubenswrapper[4644]: I0204 08:44:04.995307 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7zgx" event={"ID":"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd","Type":"ContainerStarted","Data":"b279dc5e2ebe75ddbaf718adf9e9a387f4573db944f6325199183491290265eb"} Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.003371 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2qxgg" event={"ID":"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca","Type":"ContainerStarted","Data":"2c5a4f272e1793d3f42870cf4d4ed94609f60e16c256d926a3991e3734e7f9a6"} Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.003472 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2qxgg" event={"ID":"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca","Type":"ContainerStarted","Data":"e2220f79c4ad20f9823b88656704150871946c93f61d8394e019caeb7ffa78cf"} Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.014061 4644 generic.go:334] "Generic (PLEG): container finished" 
podID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" containerID="ec8a89dc8cf5de2bb8235f9b5e24ef38b1c0b2083c7fa631c55f656125a01535" exitCode=0 Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.014125 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8qh5" event={"ID":"b95f491a-610d-44ed-ae19-9e5b7ac25f52","Type":"ContainerDied","Data":"ec8a89dc8cf5de2bb8235f9b5e24ef38b1c0b2083c7fa631c55f656125a01535"} Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.014150 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8qh5" event={"ID":"b95f491a-610d-44ed-ae19-9e5b7ac25f52","Type":"ContainerStarted","Data":"51c144d548573cd0f00112c498949919227bb0d3b69860d89433bdc80afc930c"} Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.015425 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-h6l9p" podStartSLOduration=14.015409553 podStartE2EDuration="14.015409553s" podCreationTimestamp="2026-02-04 08:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:05.013832931 +0000 UTC m=+155.053890686" watchObservedRunningTime="2026-02-04 08:44:05.015409553 +0000 UTC m=+155.055467308" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.020661 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0917f936-d7c8-4c5f-93b3-15948a332909","Type":"ContainerStarted","Data":"dd05f06efdcff1c4429314252f19e5c33d1081f6bfcee61b65a739cda4eb3158"} Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.033476 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rb5ld" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.053846 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55e01c73-5587-45e5-9a8f-47fedc43d340-catalog-content\") pod \"redhat-marketplace-r5qf8\" (UID: \"55e01c73-5587-45e5-9a8f-47fedc43d340\") " pod="openshift-marketplace/redhat-marketplace-r5qf8" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.053945 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cln8h\" (UniqueName: \"kubernetes.io/projected/55e01c73-5587-45e5-9a8f-47fedc43d340-kube-api-access-cln8h\") pod \"redhat-marketplace-r5qf8\" (UID: \"55e01c73-5587-45e5-9a8f-47fedc43d340\") " pod="openshift-marketplace/redhat-marketplace-r5qf8" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.053966 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55e01c73-5587-45e5-9a8f-47fedc43d340-utilities\") pod \"redhat-marketplace-r5qf8\" (UID: \"55e01c73-5587-45e5-9a8f-47fedc43d340\") " pod="openshift-marketplace/redhat-marketplace-r5qf8" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.053988 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.082954 4644 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.083000 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.156995 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cln8h\" (UniqueName: \"kubernetes.io/projected/55e01c73-5587-45e5-9a8f-47fedc43d340-kube-api-access-cln8h\") pod \"redhat-marketplace-r5qf8\" (UID: \"55e01c73-5587-45e5-9a8f-47fedc43d340\") " pod="openshift-marketplace/redhat-marketplace-r5qf8" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.157042 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55e01c73-5587-45e5-9a8f-47fedc43d340-utilities\") pod \"redhat-marketplace-r5qf8\" (UID: \"55e01c73-5587-45e5-9a8f-47fedc43d340\") " pod="openshift-marketplace/redhat-marketplace-r5qf8" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.157172 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55e01c73-5587-45e5-9a8f-47fedc43d340-catalog-content\") pod \"redhat-marketplace-r5qf8\" (UID: \"55e01c73-5587-45e5-9a8f-47fedc43d340\") " pod="openshift-marketplace/redhat-marketplace-r5qf8" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.158778 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55e01c73-5587-45e5-9a8f-47fedc43d340-utilities\") pod \"redhat-marketplace-r5qf8\" (UID: \"55e01c73-5587-45e5-9a8f-47fedc43d340\") " pod="openshift-marketplace/redhat-marketplace-r5qf8" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.159836 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55e01c73-5587-45e5-9a8f-47fedc43d340-catalog-content\") pod \"redhat-marketplace-r5qf8\" (UID: \"55e01c73-5587-45e5-9a8f-47fedc43d340\") " pod="openshift-marketplace/redhat-marketplace-r5qf8" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.177503 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9zrhj\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.208971 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.214356 4644 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cln8h\" (UniqueName: \"kubernetes.io/projected/55e01c73-5587-45e5-9a8f-47fedc43d340-kube-api-access-cln8h\") pod \"redhat-marketplace-r5qf8\" (UID: \"55e01c73-5587-45e5-9a8f-47fedc43d340\") " pod="openshift-marketplace/redhat-marketplace-r5qf8" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.250154 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.250854 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.251536 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.256796 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.276509 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.279189 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.296029 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5qf8" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.326940 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sltvp"] Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.330296 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sltvp" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.344022 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sltvp"] Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.362897 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90db3e94-5d64-47a6-862f-ab1e17dd62e0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"90db3e94-5d64-47a6-862f-ab1e17dd62e0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.363163 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90db3e94-5d64-47a6-862f-ab1e17dd62e0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"90db3e94-5d64-47a6-862f-ab1e17dd62e0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.376079 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.382586 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:05 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:05 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:05 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.382627 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.464073 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-catalog-content\") pod \"redhat-marketplace-sltvp\" (UID: \"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc\") " pod="openshift-marketplace/redhat-marketplace-sltvp" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.464148 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90db3e94-5d64-47a6-862f-ab1e17dd62e0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"90db3e94-5d64-47a6-862f-ab1e17dd62e0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.464289 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90db3e94-5d64-47a6-862f-ab1e17dd62e0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"90db3e94-5d64-47a6-862f-ab1e17dd62e0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.464223 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90db3e94-5d64-47a6-862f-ab1e17dd62e0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: 
\"90db3e94-5d64-47a6-862f-ab1e17dd62e0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.465394 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-utilities\") pod \"redhat-marketplace-sltvp\" (UID: \"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc\") " pod="openshift-marketplace/redhat-marketplace-sltvp" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.465420 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vv7c\" (UniqueName: \"kubernetes.io/projected/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-kube-api-access-5vv7c\") pod \"redhat-marketplace-sltvp\" (UID: \"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc\") " pod="openshift-marketplace/redhat-marketplace-sltvp" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.493016 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90db3e94-5d64-47a6-862f-ab1e17dd62e0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"90db3e94-5d64-47a6-862f-ab1e17dd62e0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.556271 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.556335 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.582539 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-catalog-content\") pod \"redhat-marketplace-sltvp\" (UID: \"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc\") " pod="openshift-marketplace/redhat-marketplace-sltvp" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.582636 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-utilities\") pod \"redhat-marketplace-sltvp\" (UID: \"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc\") " pod="openshift-marketplace/redhat-marketplace-sltvp" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.582653 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vv7c\" (UniqueName: \"kubernetes.io/projected/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-kube-api-access-5vv7c\") pod \"redhat-marketplace-sltvp\" (UID: \"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc\") " pod="openshift-marketplace/redhat-marketplace-sltvp" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.583419 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-catalog-content\") pod \"redhat-marketplace-sltvp\" (UID: 
\"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc\") " pod="openshift-marketplace/redhat-marketplace-sltvp" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.583625 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-utilities\") pod \"redhat-marketplace-sltvp\" (UID: \"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc\") " pod="openshift-marketplace/redhat-marketplace-sltvp" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.583756 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.611143 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vv7c\" (UniqueName: \"kubernetes.io/projected/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-kube-api-access-5vv7c\") pod \"redhat-marketplace-sltvp\" (UID: \"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc\") " pod="openshift-marketplace/redhat-marketplace-sltvp" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.656811 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sltvp" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.770430 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9zrhj"] Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.857574 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5qf8"] Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.920237 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8rpn5"] Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.925948 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rpn5" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.928580 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.932947 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rpn5"] Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.998034 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-catalog-content\") pod \"redhat-operators-8rpn5\" (UID: \"8c424c9d-56cc-42b6-95b2-c23ff3ed8846\") " pod="openshift-marketplace/redhat-operators-8rpn5" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.998374 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-utilities\") pod \"redhat-operators-8rpn5\" (UID: \"8c424c9d-56cc-42b6-95b2-c23ff3ed8846\") " pod="openshift-marketplace/redhat-operators-8rpn5" Feb 04 08:44:05 crc kubenswrapper[4644]: I0204 08:44:05.998525 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmwdg\" (UniqueName: \"kubernetes.io/projected/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-kube-api-access-fmwdg\") pod \"redhat-operators-8rpn5\" (UID: \"8c424c9d-56cc-42b6-95b2-c23ff3ed8846\") " pod="openshift-marketplace/redhat-operators-8rpn5" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.033470 4644 generic.go:334] "Generic (PLEG): container finished" podID="0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" containerID="2c5a4f272e1793d3f42870cf4d4ed94609f60e16c256d926a3991e3734e7f9a6" exitCode=0 Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.033537 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2qxgg" event={"ID":"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca","Type":"ContainerDied","Data":"2c5a4f272e1793d3f42870cf4d4ed94609f60e16c256d926a3991e3734e7f9a6"} Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.057336 4644 generic.go:334] "Generic (PLEG): container finished" podID="0917f936-d7c8-4c5f-93b3-15948a332909" containerID="f3c63442d049ef3090bb72c71a51ea1e5e432f10efd3dedeae5e5e05ec6df795" exitCode=0 Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.057408 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0917f936-d7c8-4c5f-93b3-15948a332909","Type":"ContainerDied","Data":"f3c63442d049ef3090bb72c71a51ea1e5e432f10efd3dedeae5e5e05ec6df795"} Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.089689 4644 generic.go:334] "Generic (PLEG): container finished" podID="e6faaab9-25f9-47fc-a713-f4b6bde29cbc" containerID="3fe6cdc8a27fe5b75a9358280358fb16ca3f2d605dedb9b838a82ef53efe75a8" exitCode=0 Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.089778 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s99cb" event={"ID":"e6faaab9-25f9-47fc-a713-f4b6bde29cbc","Type":"ContainerDied","Data":"3fe6cdc8a27fe5b75a9358280358fb16ca3f2d605dedb9b838a82ef53efe75a8"} Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.099661 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fmwdg\" (UniqueName: \"kubernetes.io/projected/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-kube-api-access-fmwdg\") pod \"redhat-operators-8rpn5\" (UID: \"8c424c9d-56cc-42b6-95b2-c23ff3ed8846\") " pod="openshift-marketplace/redhat-operators-8rpn5" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.099755 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-catalog-content\") pod \"redhat-operators-8rpn5\" (UID: \"8c424c9d-56cc-42b6-95b2-c23ff3ed8846\") " pod="openshift-marketplace/redhat-operators-8rpn5" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.099785 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-utilities\") pod \"redhat-operators-8rpn5\" (UID: \"8c424c9d-56cc-42b6-95b2-c23ff3ed8846\") " pod="openshift-marketplace/redhat-operators-8rpn5" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.100172 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-utilities\") pod \"redhat-operators-8rpn5\" (UID: \"8c424c9d-56cc-42b6-95b2-c23ff3ed8846\") " pod="openshift-marketplace/redhat-operators-8rpn5" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.100684 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-catalog-content\") pod \"redhat-operators-8rpn5\" (UID: \"8c424c9d-56cc-42b6-95b2-c23ff3ed8846\") " pod="openshift-marketplace/redhat-operators-8rpn5" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.112921 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.119496 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5qf8" event={"ID":"55e01c73-5587-45e5-9a8f-47fedc43d340","Type":"ContainerStarted","Data":"32edb925bcd11bf4d4e19103b5b08cc3086ab28cb8e1d9fbc5314a710887180f"} Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.128047 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" event={"ID":"c06497d4-3e16-42df-9c4a-657c3db32510","Type":"ContainerStarted","Data":"81c06df3fc110762080c3fe240d9c0946daad0a9ee59eb74406e93f343c4f227"} Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.130359 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sltvp"] Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.137390 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmwdg\" (UniqueName: \"kubernetes.io/projected/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-kube-api-access-fmwdg\") pod \"redhat-operators-8rpn5\" (UID: \"8c424c9d-56cc-42b6-95b2-c23ff3ed8846\") " pod="openshift-marketplace/redhat-operators-8rpn5" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.249688 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rpn5" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.314213 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qf6hm"] Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.315982 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qf6hm" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.323851 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qf6hm"] Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.387885 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:06 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:06 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:06 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.387947 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.403124 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/811802ba-9b9d-4e09-ac7d-7a62eecabb15-catalog-content\") pod \"redhat-operators-qf6hm\" (UID: \"811802ba-9b9d-4e09-ac7d-7a62eecabb15\") " pod="openshift-marketplace/redhat-operators-qf6hm" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.403194 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7kd9\" (UniqueName: \"kubernetes.io/projected/811802ba-9b9d-4e09-ac7d-7a62eecabb15-kube-api-access-x7kd9\") pod \"redhat-operators-qf6hm\" (UID: \"811802ba-9b9d-4e09-ac7d-7a62eecabb15\") " pod="openshift-marketplace/redhat-operators-qf6hm" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.403238 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/811802ba-9b9d-4e09-ac7d-7a62eecabb15-utilities\") pod \"redhat-operators-qf6hm\" (UID: \"811802ba-9b9d-4e09-ac7d-7a62eecabb15\") " pod="openshift-marketplace/redhat-operators-qf6hm" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.504582 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/811802ba-9b9d-4e09-ac7d-7a62eecabb15-catalog-content\") pod \"redhat-operators-qf6hm\" (UID: \"811802ba-9b9d-4e09-ac7d-7a62eecabb15\") " pod="openshift-marketplace/redhat-operators-qf6hm" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.504662 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7kd9\" (UniqueName: \"kubernetes.io/projected/811802ba-9b9d-4e09-ac7d-7a62eecabb15-kube-api-access-x7kd9\") pod \"redhat-operators-qf6hm\" (UID: \"811802ba-9b9d-4e09-ac7d-7a62eecabb15\") " pod="openshift-marketplace/redhat-operators-qf6hm" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.504705 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/811802ba-9b9d-4e09-ac7d-7a62eecabb15-utilities\") pod \"redhat-operators-qf6hm\" (UID: \"811802ba-9b9d-4e09-ac7d-7a62eecabb15\") " pod="openshift-marketplace/redhat-operators-qf6hm" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.505896 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/811802ba-9b9d-4e09-ac7d-7a62eecabb15-utilities\") pod \"redhat-operators-qf6hm\" (UID: \"811802ba-9b9d-4e09-ac7d-7a62eecabb15\") " pod="openshift-marketplace/redhat-operators-qf6hm" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.506205 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/811802ba-9b9d-4e09-ac7d-7a62eecabb15-catalog-content\") pod \"redhat-operators-qf6hm\" (UID: \"811802ba-9b9d-4e09-ac7d-7a62eecabb15\") " pod="openshift-marketplace/redhat-operators-qf6hm" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.545917 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7kd9\" (UniqueName: \"kubernetes.io/projected/811802ba-9b9d-4e09-ac7d-7a62eecabb15-kube-api-access-x7kd9\") pod \"redhat-operators-qf6hm\" (UID: \"811802ba-9b9d-4e09-ac7d-7a62eecabb15\") " pod="openshift-marketplace/redhat-operators-qf6hm" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.569018 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rpn5"] Feb 04 08:44:06 crc kubenswrapper[4644]: W0204 08:44:06.579878 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c424c9d_56cc_42b6_95b2_c23ff3ed8846.slice/crio-9a05a60cd3cd2cccf7b61ec0fad9285b25d6fe8b4fbe21fcd4870af78f36b277 WatchSource:0}: Error finding container 9a05a60cd3cd2cccf7b61ec0fad9285b25d6fe8b4fbe21fcd4870af78f36b277: Status 404 returned error can't find the container with id 9a05a60cd3cd2cccf7b61ec0fad9285b25d6fe8b4fbe21fcd4870af78f36b277 Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.652653 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qf6hm" Feb 04 08:44:06 crc kubenswrapper[4644]: I0204 08:44:06.668401 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.136749 4644 generic.go:334] "Generic (PLEG): container finished" podID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" containerID="ecaf9505796715e88d96d4f292e4788a4368b104954edbebfb6b6ad9e8d44161" exitCode=0 Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.137021 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rpn5" event={"ID":"8c424c9d-56cc-42b6-95b2-c23ff3ed8846","Type":"ContainerDied","Data":"ecaf9505796715e88d96d4f292e4788a4368b104954edbebfb6b6ad9e8d44161"} Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.137048 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rpn5" event={"ID":"8c424c9d-56cc-42b6-95b2-c23ff3ed8846","Type":"ContainerStarted","Data":"9a05a60cd3cd2cccf7b61ec0fad9285b25d6fe8b4fbe21fcd4870af78f36b277"} Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.153742 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qf6hm"] Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.163715 4644 generic.go:334] "Generic (PLEG): container finished" podID="aa2f8af2-85ef-4b5d-a95a-194c6a05a501" containerID="2f331a753711e61947e987b07ed7ded0d7923fca166fea98bcfcd90f713f104b" exitCode=0 Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.163805 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" event={"ID":"aa2f8af2-85ef-4b5d-a95a-194c6a05a501","Type":"ContainerDied","Data":"2f331a753711e61947e987b07ed7ded0d7923fca166fea98bcfcd90f713f104b"} Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.172258 4644 generic.go:334] "Generic (PLEG): container finished" podID="55e01c73-5587-45e5-9a8f-47fedc43d340" containerID="7eec73653c7630c9b93779e74d3f619d7bbb15afda1f992be0bb5da6116236e5" exitCode=0 Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.172402 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5qf8" event={"ID":"55e01c73-5587-45e5-9a8f-47fedc43d340","Type":"ContainerDied","Data":"7eec73653c7630c9b93779e74d3f619d7bbb15afda1f992be0bb5da6116236e5"} Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.177170 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" event={"ID":"c06497d4-3e16-42df-9c4a-657c3db32510","Type":"ContainerStarted","Data":"8fa15f46d277aa0b99ec0fde5157be4be48664a75c34fd25af7bbb073d6209d0"} Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.177838 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.187562 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"90db3e94-5d64-47a6-862f-ab1e17dd62e0","Type":"ContainerStarted","Data":"650c6d03f9ae4ddbe59d975685cb10d19f6652950287b8894804080eb567d1d0"} Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.187605 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"90db3e94-5d64-47a6-862f-ab1e17dd62e0","Type":"ContainerStarted","Data":"2871e3c28dbb36ae81841669f2efdbdf515c934e990d2a17ca99e6a1050f63dc"} Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.190852 4644 generic.go:334] "Generic (PLEG): container finished" podID="b0dd29e1-253e-48f5-9e1f-df4c9493fcbc" containerID="856bc80b2b0c2ae90c222280215d3db3134b6a0b69b08a1a93058b1a808698d6" exitCode=0 Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.191405 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sltvp" event={"ID":"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc","Type":"ContainerDied","Data":"856bc80b2b0c2ae90c222280215d3db3134b6a0b69b08a1a93058b1a808698d6"} Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.191428 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sltvp" event={"ID":"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc","Type":"ContainerStarted","Data":"aabe1ab29c406bb22c689a161fd37360566cbdf314f828d640f783cac05b0526"} Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.224424 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" podStartSLOduration=136.224399703 podStartE2EDuration="2m16.224399703s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:07.214962946 +0000 UTC m=+157.255020701" watchObservedRunningTime="2026-02-04 08:44:07.224399703 +0000 UTC m=+157.264457458" Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.259280 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.259265267 podStartE2EDuration="2.259265267s" podCreationTimestamp="2026-02-04 08:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:07.25798108 +0000 UTC m=+157.298038835" watchObservedRunningTime="2026-02-04 08:44:07.259265267 +0000 UTC m=+157.299323022" Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.383395 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:07 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:07 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:07 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.383444 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.587548 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.731919 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0917f936-d7c8-4c5f-93b3-15948a332909-kube-api-access\") pod \"0917f936-d7c8-4c5f-93b3-15948a332909\" (UID: \"0917f936-d7c8-4c5f-93b3-15948a332909\") " Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.731997 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0917f936-d7c8-4c5f-93b3-15948a332909-kubelet-dir\") pod \"0917f936-d7c8-4c5f-93b3-15948a332909\" (UID: \"0917f936-d7c8-4c5f-93b3-15948a332909\") " Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.732168 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0917f936-d7c8-4c5f-93b3-15948a332909-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0917f936-d7c8-4c5f-93b3-15948a332909" (UID: "0917f936-d7c8-4c5f-93b3-15948a332909"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.733075 4644 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0917f936-d7c8-4c5f-93b3-15948a332909-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.738909 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0917f936-d7c8-4c5f-93b3-15948a332909-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0917f936-d7c8-4c5f-93b3-15948a332909" (UID: "0917f936-d7c8-4c5f-93b3-15948a332909"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:44:07 crc kubenswrapper[4644]: I0204 08:44:07.841267 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0917f936-d7c8-4c5f-93b3-15948a332909-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.227673 4644 generic.go:334] "Generic (PLEG): container finished" podID="811802ba-9b9d-4e09-ac7d-7a62eecabb15" containerID="02a36b734c5b61a1e06a181a90e34731f13daeb26e52186dc4dd2a922cf3e1c5" exitCode=0 Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.228012 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf6hm" event={"ID":"811802ba-9b9d-4e09-ac7d-7a62eecabb15","Type":"ContainerDied","Data":"02a36b734c5b61a1e06a181a90e34731f13daeb26e52186dc4dd2a922cf3e1c5"} Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.228058 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf6hm" event={"ID":"811802ba-9b9d-4e09-ac7d-7a62eecabb15","Type":"ContainerStarted","Data":"987dc7ee01d5fa2c864af2362f7e85c11e5b8e1a1b1a657d2a2f5b45f96611bc"} Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.238662 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0917f936-d7c8-4c5f-93b3-15948a332909","Type":"ContainerDied","Data":"dd05f06efdcff1c4429314252f19e5c33d1081f6bfcee61b65a739cda4eb3158"} Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.238702 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd05f06efdcff1c4429314252f19e5c33d1081f6bfcee61b65a739cda4eb3158" Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.238675 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.260897 4644 generic.go:334] "Generic (PLEG): container finished" podID="90db3e94-5d64-47a6-862f-ab1e17dd62e0" containerID="650c6d03f9ae4ddbe59d975685cb10d19f6652950287b8894804080eb567d1d0" exitCode=0 Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.261337 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"90db3e94-5d64-47a6-862f-ab1e17dd62e0","Type":"ContainerDied","Data":"650c6d03f9ae4ddbe59d975685cb10d19f6652950287b8894804080eb567d1d0"} Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.379709 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:08 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:08 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:08 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.379761 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.621520 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.652528 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-w4cjj" Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.793736 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.866232 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-secret-volume\") pod \"aa2f8af2-85ef-4b5d-a95a-194c6a05a501\" (UID: \"aa2f8af2-85ef-4b5d-a95a-194c6a05a501\") " Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.866373 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-config-volume\") pod \"aa2f8af2-85ef-4b5d-a95a-194c6a05a501\" (UID: \"aa2f8af2-85ef-4b5d-a95a-194c6a05a501\") " Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.866421 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pt9c\" (UniqueName: \"kubernetes.io/projected/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-kube-api-access-2pt9c\") pod \"aa2f8af2-85ef-4b5d-a95a-194c6a05a501\" (UID: \"aa2f8af2-85ef-4b5d-a95a-194c6a05a501\") " Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.868339 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-config-volume" (OuterVolumeSpecName: "config-volume") pod "aa2f8af2-85ef-4b5d-a95a-194c6a05a501" (UID: "aa2f8af2-85ef-4b5d-a95a-194c6a05a501"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.872313 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aa2f8af2-85ef-4b5d-a95a-194c6a05a501" (UID: "aa2f8af2-85ef-4b5d-a95a-194c6a05a501"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.874044 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-kube-api-access-2pt9c" (OuterVolumeSpecName: "kube-api-access-2pt9c") pod "aa2f8af2-85ef-4b5d-a95a-194c6a05a501" (UID: "aa2f8af2-85ef-4b5d-a95a-194c6a05a501"). InnerVolumeSpecName "kube-api-access-2pt9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.968200 4644 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.968252 4644 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-config-volume\") on node \"crc\" DevicePath \"\"" Feb 04 08:44:08 crc kubenswrapper[4644]: I0204 08:44:08.968265 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pt9c\" (UniqueName: \"kubernetes.io/projected/aa2f8af2-85ef-4b5d-a95a-194c6a05a501-kube-api-access-2pt9c\") on node \"crc\" DevicePath \"\"" Feb 04 08:44:09 crc kubenswrapper[4644]: I0204 08:44:09.343402 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" event={"ID":"aa2f8af2-85ef-4b5d-a95a-194c6a05a501","Type":"ContainerDied","Data":"7d0fe87615fa2d4730f3f0cd338cb38e3afeb898d17d58c902d4e88f3ec2ec70"} Feb 04 08:44:09 crc kubenswrapper[4644]: I0204 08:44:09.343452 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d0fe87615fa2d4730f3f0cd338cb38e3afeb898d17d58c902d4e88f3ec2ec70" Feb 04 08:44:09 crc kubenswrapper[4644]: I0204 08:44:09.343538 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf" Feb 04 08:44:09 crc kubenswrapper[4644]: I0204 08:44:09.381203 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:09 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:09 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:09 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:09 crc kubenswrapper[4644]: I0204 08:44:09.381260 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:09 crc kubenswrapper[4644]: I0204 08:44:09.952710 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 08:44:10 crc kubenswrapper[4644]: I0204 08:44:10.095873 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90db3e94-5d64-47a6-862f-ab1e17dd62e0-kube-api-access\") pod \"90db3e94-5d64-47a6-862f-ab1e17dd62e0\" (UID: \"90db3e94-5d64-47a6-862f-ab1e17dd62e0\") " Feb 04 08:44:10 crc kubenswrapper[4644]: I0204 08:44:10.096115 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90db3e94-5d64-47a6-862f-ab1e17dd62e0-kubelet-dir\") pod \"90db3e94-5d64-47a6-862f-ab1e17dd62e0\" (UID: \"90db3e94-5d64-47a6-862f-ab1e17dd62e0\") " Feb 04 08:44:10 crc kubenswrapper[4644]: I0204 08:44:10.096412 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90db3e94-5d64-47a6-862f-ab1e17dd62e0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "90db3e94-5d64-47a6-862f-ab1e17dd62e0" (UID: "90db3e94-5d64-47a6-862f-ab1e17dd62e0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:44:10 crc kubenswrapper[4644]: I0204 08:44:10.107050 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90db3e94-5d64-47a6-862f-ab1e17dd62e0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "90db3e94-5d64-47a6-862f-ab1e17dd62e0" (UID: "90db3e94-5d64-47a6-862f-ab1e17dd62e0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:44:10 crc kubenswrapper[4644]: I0204 08:44:10.197824 4644 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90db3e94-5d64-47a6-862f-ab1e17dd62e0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 04 08:44:10 crc kubenswrapper[4644]: I0204 08:44:10.197855 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90db3e94-5d64-47a6-862f-ab1e17dd62e0-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 04 08:44:10 crc kubenswrapper[4644]: I0204 08:44:10.277758 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-6qfw4" Feb 04 08:44:10 crc kubenswrapper[4644]: I0204 08:44:10.388987 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:10 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:10 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:10 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:10 crc kubenswrapper[4644]: I0204 08:44:10.389057 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:10 crc kubenswrapper[4644]: I0204 08:44:10.443977 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"90db3e94-5d64-47a6-862f-ab1e17dd62e0","Type":"ContainerDied","Data":"2871e3c28dbb36ae81841669f2efdbdf515c934e990d2a17ca99e6a1050f63dc"} Feb 04 08:44:10 crc 
kubenswrapper[4644]: I0204 08:44:10.444017 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2871e3c28dbb36ae81841669f2efdbdf515c934e990d2a17ca99e6a1050f63dc" Feb 04 08:44:10 crc kubenswrapper[4644]: I0204 08:44:10.444105 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 08:44:11 crc kubenswrapper[4644]: I0204 08:44:11.380133 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:11 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:11 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:11 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:11 crc kubenswrapper[4644]: I0204 08:44:11.380709 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:11 crc kubenswrapper[4644]: I0204 08:44:11.431886 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:44:12 crc kubenswrapper[4644]: I0204 08:44:12.385389 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:12 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:12 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:12 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:12 crc kubenswrapper[4644]: I0204 08:44:12.385474 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:13 crc kubenswrapper[4644]: I0204 08:44:13.379439 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:13 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:13 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:13 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:13 crc kubenswrapper[4644]: I0204 08:44:13.380241 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:13 crc kubenswrapper[4644]: I0204 08:44:13.629420 4644 patch_prober.go:28] interesting pod/console-f9d7485db-2mwnq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 04 08:44:13 crc kubenswrapper[4644]: I0204 08:44:13.629593 4644 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-f9d7485db-2mwnq" podUID="59a2b9fd-ede9-4e85-8ad0-552716ecca00" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 04 08:44:13 crc kubenswrapper[4644]: I0204 08:44:13.727165 4644 patch_prober.go:28] interesting pod/downloads-7954f5f757-zcwz9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 04 08:44:13 crc kubenswrapper[4644]: I0204 08:44:13.727207 4644 patch_prober.go:28] interesting pod/downloads-7954f5f757-zcwz9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 04 08:44:13 crc kubenswrapper[4644]: I0204 08:44:13.727230 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zcwz9" podUID="323e297c-2d63-4230-8110-c7d9c9da3538" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 04 08:44:13 crc kubenswrapper[4644]: I0204 08:44:13.727263 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zcwz9" podUID="323e297c-2d63-4230-8110-c7d9c9da3538" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 04 08:44:14 crc kubenswrapper[4644]: I0204 08:44:14.139107 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs\") pod \"network-metrics-daemon-f6ghp\" (UID: \"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\") " pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:44:14 crc kubenswrapper[4644]: I0204 08:44:14.161778 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0c747eb-fe5e-4cad-a021-307cc2ed1ad5-metrics-certs\") pod \"network-metrics-daemon-f6ghp\" (UID: \"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5\") " pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:44:14 crc kubenswrapper[4644]: I0204 08:44:14.279688 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f6ghp" Feb 04 08:44:14 crc kubenswrapper[4644]: I0204 08:44:14.382632 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:14 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:14 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:14 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:14 crc kubenswrapper[4644]: I0204 08:44:14.383117 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:15 crc kubenswrapper[4644]: I0204 08:44:15.187383 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-f6ghp"] Feb 04 08:44:15 crc kubenswrapper[4644]: W0204 08:44:15.255731 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0c747eb_fe5e_4cad_a021_307cc2ed1ad5.slice/crio-62b7dd6595abfda0634bf65752ed68f1bb76d1ae57a957935eb3208e4d692a44 WatchSource:0}: Error finding container 62b7dd6595abfda0634bf65752ed68f1bb76d1ae57a957935eb3208e4d692a44: Status 404 returned error can't find the container with id 62b7dd6595abfda0634bf65752ed68f1bb76d1ae57a957935eb3208e4d692a44 Feb 04 08:44:15 crc kubenswrapper[4644]: I0204 08:44:15.379463 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:15 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:15 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:15 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:15 crc kubenswrapper[4644]: I0204 08:44:15.379532 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:15 crc kubenswrapper[4644]: I0204 08:44:15.561245 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" event={"ID":"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5","Type":"ContainerStarted","Data":"62b7dd6595abfda0634bf65752ed68f1bb76d1ae57a957935eb3208e4d692a44"} Feb 04 08:44:16 crc kubenswrapper[4644]: I0204 08:44:16.380278 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:16 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:16 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:16 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:16 crc kubenswrapper[4644]: I0204 08:44:16.380620 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 04 08:44:17 crc kubenswrapper[4644]: I0204 08:44:17.378447 4644 patch_prober.go:28] interesting pod/router-default-5444994796-zjdj8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 08:44:17 crc kubenswrapper[4644]: [-]has-synced failed: reason withheld Feb 04 08:44:17 crc kubenswrapper[4644]: [+]process-running ok Feb 04 08:44:17 crc kubenswrapper[4644]: healthz check failed Feb 04 08:44:17 crc kubenswrapper[4644]: I0204 08:44:17.378517 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zjdj8" podUID="7938153c-4023-411c-883b-e0c61b8a955b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 08:44:17 crc kubenswrapper[4644]: I0204 08:44:17.605070 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" event={"ID":"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5","Type":"ContainerStarted","Data":"97b280a052de9bfbe43f4a9efea567131e86a503a0fb736d17b331758734f53d"} Feb 04 08:44:18 crc kubenswrapper[4644]: I0204 08:44:18.388985 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:44:18 crc kubenswrapper[4644]: I0204 08:44:18.398940 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-zjdj8" Feb 04 08:44:23 crc kubenswrapper[4644]: I0204 08:44:23.632999 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:44:23 crc kubenswrapper[4644]: I0204 08:44:23.636731 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:44:23 crc kubenswrapper[4644]: I0204 08:44:23.738842 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-zcwz9" Feb 04 08:44:25 crc kubenswrapper[4644]: I0204 08:44:25.258631 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:44:34 crc kubenswrapper[4644]: I0204 08:44:34.870488 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nmbd7" Feb 04 08:44:35 crc kubenswrapper[4644]: I0204 08:44:35.556843 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 08:44:35 crc kubenswrapper[4644]: I0204 08:44:35.557367 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 08:44:37 crc kubenswrapper[4644]: I0204 08:44:37.905580 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 08:44:39 crc kubenswrapper[4644]: 
I0204 08:44:39.250123 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fxwlv"] Feb 04 08:44:39 crc kubenswrapper[4644]: E0204 08:44:39.289946 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 04 08:44:39 crc kubenswrapper[4644]: E0204 08:44:39.290154 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6756,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2qxgg_openshift-marketplace(0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 04 08:44:39 crc kubenswrapper[4644]: E0204 08:44:39.292052 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-2qxgg" podUID="0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" Feb 04 08:44:41 crc kubenswrapper[4644]: E0204 08:44:41.933913 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2qxgg" podUID="0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" Feb 04 08:44:42 crc kubenswrapper[4644]: E0204 08:44:42.003584 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 04 08:44:42 crc kubenswrapper[4644]: E0204 08:44:42.003788 4644 kuberuntime_manager.go:1274] 
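--- editor's note ---
From here the marketplace catalog pods cycle through the classic pull-failure loop: the registry pull is cancelled (ErrImagePull), and until a per-image backoff window expires, later syncs are short-circuited with ImagePullBackOff ("Back-off pulling image ..."). The kubelet does this with its flowcontrol.Backoff helper; the sketch below is only a simplified illustration of the shape of that bookkeeping, with assumed base and cap values:

    package main

    import (
    	"fmt"
    	"time"
    )

    // pullBackoff tracks per-image retry state, doubling the delay on each
    // failed pull up to a cap, the way the kubelet spaces out ErrImagePull
    // retries and reports ImagePullBackOff in between.
    type pullBackoff struct {
    	delay map[string]time.Duration
    	next  map[string]time.Time
    	max   time.Duration
    }

    func newPullBackoff() *pullBackoff {
    	return &pullBackoff{
    		delay: map[string]time.Duration{},
    		next:  map[string]time.Time{},
    		max:   5 * time.Minute, // assumed cap, comparable to the kubelet default
    	}
    }

    // shouldPull returns false while the image is still inside its backoff
    // window; the caller then surfaces ImagePullBackOff instead of pulling.
    func (b *pullBackoff) shouldPull(image string, now time.Time) bool {
    	return now.After(b.next[image])
    }

    // recordFailure doubles the image's backoff window (10s, 20s, 40s, ...).
    func (b *pullBackoff) recordFailure(image string, now time.Time) {
    	d := b.delay[image]
    	if d == 0 {
    		d = 10 * time.Second
    	} else {
    		d *= 2
    		if d > b.max {
    			d = b.max
    		}
    	}
    	b.delay[image] = d
    	b.next[image] = now.Add(d)
    }

    func main() {
    	b := newPullBackoff()
    	img := "registry.redhat.io/redhat/community-operator-index:v4.18"
    	now := time.Now()
    	for i := 0; i < 3; i++ {
    		if b.shouldPull(img, now) {
    			fmt.Println("pulling (would fail with ErrImagePull)")
    			b.recordFailure(img, now)
    		} else {
    			fmt.Printf("Back-off pulling image %q\n", img)
    		}
    		now = now.Add(15 * time.Second)
    	}
    }
--- end note ---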
"Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blbj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-s99cb_openshift-marketplace(e6faaab9-25f9-47fc-a713-f4b6bde29cbc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 04 08:44:42 crc kubenswrapper[4644]: E0204 08:44:42.005031 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-s99cb" podUID="e6faaab9-25f9-47fc-a713-f4b6bde29cbc" Feb 04 08:44:42 crc kubenswrapper[4644]: E0204 08:44:42.037474 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 04 08:44:42 crc kubenswrapper[4644]: E0204 08:44:42.037645 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnn6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-d8qh5_openshift-marketplace(b95f491a-610d-44ed-ae19-9e5b7ac25f52): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 04 08:44:42 crc kubenswrapper[4644]: E0204 08:44:42.040271 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-d8qh5" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.048187 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 04 08:44:43 crc kubenswrapper[4644]: E0204 08:44:43.048676 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0917f936-d7c8-4c5f-93b3-15948a332909" containerName="pruner" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.048698 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="0917f936-d7c8-4c5f-93b3-15948a332909" containerName="pruner" Feb 04 08:44:43 crc kubenswrapper[4644]: E0204 08:44:43.048728 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90db3e94-5d64-47a6-862f-ab1e17dd62e0" containerName="pruner" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.048737 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="90db3e94-5d64-47a6-862f-ab1e17dd62e0" containerName="pruner" Feb 04 08:44:43 crc kubenswrapper[4644]: E0204 08:44:43.048750 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa2f8af2-85ef-4b5d-a95a-194c6a05a501" containerName="collect-profiles" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.048759 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa2f8af2-85ef-4b5d-a95a-194c6a05a501" containerName="collect-profiles" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.048908 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa2f8af2-85ef-4b5d-a95a-194c6a05a501" containerName="collect-profiles" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 
08:44:43.048924 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="90db3e94-5d64-47a6-862f-ab1e17dd62e0" containerName="pruner" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.048936 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="0917f936-d7c8-4c5f-93b3-15948a332909" containerName="pruner" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.049692 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.053369 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.053525 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.059186 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.176312 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc5bd17c-911e-4bae-9326-5d18f8e928ca-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cc5bd17c-911e-4bae-9326-5d18f8e928ca\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.176717 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc5bd17c-911e-4bae-9326-5d18f8e928ca-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cc5bd17c-911e-4bae-9326-5d18f8e928ca\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.277598 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc5bd17c-911e-4bae-9326-5d18f8e928ca-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cc5bd17c-911e-4bae-9326-5d18f8e928ca\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.277646 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc5bd17c-911e-4bae-9326-5d18f8e928ca-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cc5bd17c-911e-4bae-9326-5d18f8e928ca\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.277990 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc5bd17c-911e-4bae-9326-5d18f8e928ca-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cc5bd17c-911e-4bae-9326-5d18f8e928ca\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.297038 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc5bd17c-911e-4bae-9326-5d18f8e928ca-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cc5bd17c-911e-4bae-9326-5d18f8e928ca\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 08:44:43 crc kubenswrapper[4644]: I0204 08:44:43.372553 4644 util.go:30] "No sandbox for pod can be found. 
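--- editor's note ---
Before admitting the new revision-pruner-9-crc, the CPU and memory managers purge per-container state left by the three already-deleted pods ("RemoveStaleState", "Deleted CPUSet assignment"). A compact sketch of that cleanup, with a toy state slice standing in for the managers' real checkpointed state:

    package main

    import "fmt"

    // assignment is a toy stand-in for the CPU manager's per-container state.
    type assignment struct {
    	podUID, container string
    }

    // removeStaleState drops assignments whose pods are no longer active,
    // mirroring the cpu_manager/memory_manager log lines above.
    func removeStaleState(state []assignment, active map[string]bool) []assignment {
    	kept := state[:0]
    	for _, a := range state {
    		if active[a.podUID] {
    			kept = append(kept, a)
    			continue
    		}
    		fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
    			a.podUID, a.container)
    		fmt.Printf("Deleted CPUSet assignment podUID=%q containerName=%q\n",
    			a.podUID, a.container)
    	}
    	return kept
    }

    func main() {
    	state := []assignment{
    		{"0917f936-d7c8-4c5f-93b3-15948a332909", "pruner"},
    		{"90db3e94-5d64-47a6-862f-ab1e17dd62e0", "pruner"},
    		{"aa2f8af2-85ef-4b5d-a95a-194c6a05a501", "collect-profiles"},
    	}
    	// None of the three pods is still active, so every assignment goes.
    	removeStaleState(state, map[string]bool{})
    }
--- end note ---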
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 08:44:48 crc kubenswrapper[4644]: E0204 08:44:48.240744 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 04 08:44:48 crc kubenswrapper[4644]: E0204 08:44:48.241872 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7kd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qf6hm_openshift-marketplace(811802ba-9b9d-4e09-ac7d-7a62eecabb15): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 04 08:44:48 crc kubenswrapper[4644]: E0204 08:44:48.243240 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qf6hm" podUID="811802ba-9b9d-4e09-ac7d-7a62eecabb15" Feb 04 08:44:49 crc kubenswrapper[4644]: E0204 08:44:49.289793 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 04 08:44:49 crc kubenswrapper[4644]: E0204 08:44:49.289977 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9sn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g7zgx_openshift-marketplace(b6b3f648-8d29-4f2c-b3f0-cb29c65133bd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 04 08:44:49 crc kubenswrapper[4644]: E0204 08:44:49.291177 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-g7zgx" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" Feb 04 08:44:49 crc kubenswrapper[4644]: I0204 08:44:49.434123 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 04 08:44:49 crc kubenswrapper[4644]: I0204 08:44:49.435184 4644 util.go:30] "No sandbox for pod can be found. 
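--- editor's note ---
Every catalog pod above mounts a projected kube-api-access-* volume at /var/run/secrets/kubernetes.io/serviceaccount (visible in the VolumeMounts of the dumped container specs); that is how the extract-content init container would authenticate to the API server once it actually runs. A short sketch of what a process finds under that mount; the three file names are the standard projected contents, and the reading code is purely illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // The projected kube-api-access volume delivers three files at the
    // standard in-pod service account path seen in the log's VolumeMounts.
    const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

    func main() {
    	for _, f := range []string{"token", "ca.crt", "namespace"} {
    		data, err := os.ReadFile(filepath.Join(saDir, f))
    		if err != nil {
    			fmt.Printf("%s: %v (expected when run outside a pod)\n", f, err)
    			continue
    		}
    		fmt.Printf("%s: %d bytes\n", f, len(data))
    	}
    }
--- end note ---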
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 04 08:44:49 crc kubenswrapper[4644]: I0204 08:44:49.439976 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 04 08:44:49 crc kubenswrapper[4644]: I0204 08:44:49.518295 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/967f5ee0-e078-4ed0-83ae-7147dcd7192e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"967f5ee0-e078-4ed0-83ae-7147dcd7192e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 08:44:49 crc kubenswrapper[4644]: I0204 08:44:49.518623 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/967f5ee0-e078-4ed0-83ae-7147dcd7192e-kube-api-access\") pod \"installer-9-crc\" (UID: \"967f5ee0-e078-4ed0-83ae-7147dcd7192e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 08:44:49 crc kubenswrapper[4644]: I0204 08:44:49.518894 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/967f5ee0-e078-4ed0-83ae-7147dcd7192e-var-lock\") pod \"installer-9-crc\" (UID: \"967f5ee0-e078-4ed0-83ae-7147dcd7192e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 08:44:49 crc kubenswrapper[4644]: I0204 08:44:49.620129 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/967f5ee0-e078-4ed0-83ae-7147dcd7192e-var-lock\") pod \"installer-9-crc\" (UID: \"967f5ee0-e078-4ed0-83ae-7147dcd7192e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 08:44:49 crc kubenswrapper[4644]: I0204 08:44:49.620203 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/967f5ee0-e078-4ed0-83ae-7147dcd7192e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"967f5ee0-e078-4ed0-83ae-7147dcd7192e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 08:44:49 crc kubenswrapper[4644]: I0204 08:44:49.620251 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/967f5ee0-e078-4ed0-83ae-7147dcd7192e-kube-api-access\") pod \"installer-9-crc\" (UID: \"967f5ee0-e078-4ed0-83ae-7147dcd7192e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 08:44:49 crc kubenswrapper[4644]: I0204 08:44:49.620399 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/967f5ee0-e078-4ed0-83ae-7147dcd7192e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"967f5ee0-e078-4ed0-83ae-7147dcd7192e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 08:44:49 crc kubenswrapper[4644]: I0204 08:44:49.620528 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/967f5ee0-e078-4ed0-83ae-7147dcd7192e-var-lock\") pod \"installer-9-crc\" (UID: \"967f5ee0-e078-4ed0-83ae-7147dcd7192e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 08:44:49 crc kubenswrapper[4644]: I0204 08:44:49.639621 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/967f5ee0-e078-4ed0-83ae-7147dcd7192e-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"967f5ee0-e078-4ed0-83ae-7147dcd7192e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 08:44:49 crc kubenswrapper[4644]: I0204 08:44:49.766438 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.468924 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qf6hm" podUID="811802ba-9b9d-4e09-ac7d-7a62eecabb15" Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.469636 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-d8qh5" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.469716 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-s99cb" podUID="e6faaab9-25f9-47fc-a713-f4b6bde29cbc" Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.469843 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-g7zgx" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.541242 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.541414 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fmwdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8rpn5_openshift-marketplace(8c424c9d-56cc-42b6-95b2-c23ff3ed8846): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.542869 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8rpn5" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.825010 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8rpn5" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.879601 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.879762 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.879601 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.879762 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cln8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-r5qf8_openshift-marketplace(55e01c73-5587-45e5-9a8f-47fedc43d340): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.881113 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-r5qf8" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340"
Feb 04 08:44:50 crc kubenswrapper[4644]: I0204 08:44:50.903846 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 04 08:44:50 crc kubenswrapper[4644]: I0204 08:44:50.954436 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.964767 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.964931 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vv7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sltvp_openshift-marketplace(b0dd29e1-253e-48f5-9e1f-df4c9493fcbc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 04 08:44:50 crc kubenswrapper[4644]: E0204 08:44:50.966651 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-sltvp" podUID="b0dd29e1-253e-48f5-9e1f-df4c9493fcbc"
Feb 04 08:44:51 crc kubenswrapper[4644]: I0204 08:44:51.840004 4644 generic.go:334] "Generic (PLEG): container finished" podID="cc5bd17c-911e-4bae-9326-5d18f8e928ca" containerID="a38775b7e1d1f9ea377582a4fe9a7ed812a34c53c047c3cdf87c976f76ff5095" exitCode=0
Feb 04 08:44:51 crc kubenswrapper[4644]: I0204 08:44:51.840413 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cc5bd17c-911e-4bae-9326-5d18f8e928ca","Type":"ContainerDied","Data":"a38775b7e1d1f9ea377582a4fe9a7ed812a34c53c047c3cdf87c976f76ff5095"}
Feb 04 08:44:51 crc kubenswrapper[4644]: I0204 08:44:51.840476 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cc5bd17c-911e-4bae-9326-5d18f8e928ca","Type":"ContainerStarted","Data":"193a1415838294d824e646a4c9813ada42a9c3fb5f19a695add02297839c526b"}
Feb 04 08:44:51 crc kubenswrapper[4644]: I0204 08:44:51.847728 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"967f5ee0-e078-4ed0-83ae-7147dcd7192e","Type":"ContainerStarted","Data":"25851de601f33307b0e8432c78c0f8cf8e91b6d98fb092ee8b97576d713453ea"}
Feb 04 08:44:51 crc kubenswrapper[4644]: I0204 08:44:51.847805 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"967f5ee0-e078-4ed0-83ae-7147dcd7192e","Type":"ContainerStarted","Data":"e7356a07019162a3402cc8912972b9b13b74c59b028a84ebab95f985671d687f"}
Feb 04 08:44:51 crc kubenswrapper[4644]: I0204 08:44:51.859539 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-f6ghp" event={"ID":"b0c747eb-fe5e-4cad-a021-307cc2ed1ad5","Type":"ContainerStarted","Data":"7a7be0b6a137f858712439e60d8a0ee6848ccea7aa1261c97611b92ccfb80f75"}
Feb 04 08:44:51 crc kubenswrapper[4644]: E0204 08:44:51.863075 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-r5qf8" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340"
Feb 04 08:44:51 crc kubenswrapper[4644]: E0204 08:44:51.863467 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sltvp" podUID="b0dd29e1-253e-48f5-9e1f-df4c9493fcbc"
Feb 04 08:44:51 crc kubenswrapper[4644]: I0204 08:44:51.881251 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.8812298 podStartE2EDuration="2.8812298s" podCreationTimestamp="2026-02-04 08:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:51.873033627 +0000 UTC m=+201.913091392" watchObservedRunningTime="2026-02-04 08:44:51.8812298 +0000 UTC m=+201.921287565"
Feb 04 08:44:51 crc kubenswrapper[4644]: I0204 08:44:51.912849 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-f6ghp" podStartSLOduration=180.912831307 podStartE2EDuration="3m0.912831307s" podCreationTimestamp="2026-02-04 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:44:51.912285563 +0000 UTC m=+201.952343328" watchObservedRunningTime="2026-02-04 08:44:51.912831307 +0000 UTC m=+201.952889062"
Feb 04 08:44:53 crc kubenswrapper[4644]: I0204 08:44:53.253204 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 04 08:44:53 crc kubenswrapper[4644]: I0204 08:44:53.368281 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc5bd17c-911e-4bae-9326-5d18f8e928ca-kubelet-dir\") pod \"cc5bd17c-911e-4bae-9326-5d18f8e928ca\" (UID: \"cc5bd17c-911e-4bae-9326-5d18f8e928ca\") "
Feb 04 08:44:53 crc kubenswrapper[4644]: I0204 08:44:53.368402 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc5bd17c-911e-4bae-9326-5d18f8e928ca-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cc5bd17c-911e-4bae-9326-5d18f8e928ca" (UID: "cc5bd17c-911e-4bae-9326-5d18f8e928ca"). InnerVolumeSpecName "kubelet-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:44:53 crc kubenswrapper[4644]: I0204 08:44:53.368434 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc5bd17c-911e-4bae-9326-5d18f8e928ca-kube-api-access\") pod \"cc5bd17c-911e-4bae-9326-5d18f8e928ca\" (UID: \"cc5bd17c-911e-4bae-9326-5d18f8e928ca\") " Feb 04 08:44:53 crc kubenswrapper[4644]: I0204 08:44:53.368660 4644 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc5bd17c-911e-4bae-9326-5d18f8e928ca-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 04 08:44:53 crc kubenswrapper[4644]: I0204 08:44:53.375571 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc5bd17c-911e-4bae-9326-5d18f8e928ca-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cc5bd17c-911e-4bae-9326-5d18f8e928ca" (UID: "cc5bd17c-911e-4bae-9326-5d18f8e928ca"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:44:53 crc kubenswrapper[4644]: I0204 08:44:53.469723 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc5bd17c-911e-4bae-9326-5d18f8e928ca-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 04 08:44:53 crc kubenswrapper[4644]: I0204 08:44:53.924588 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cc5bd17c-911e-4bae-9326-5d18f8e928ca","Type":"ContainerDied","Data":"193a1415838294d824e646a4c9813ada42a9c3fb5f19a695add02297839c526b"} Feb 04 08:44:53 crc kubenswrapper[4644]: I0204 08:44:53.924640 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="193a1415838294d824e646a4c9813ada42a9c3fb5f19a695add02297839c526b" Feb 04 08:44:53 crc kubenswrapper[4644]: I0204 08:44:53.924713 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 08:44:56 crc kubenswrapper[4644]: I0204 08:44:56.940713 4644 generic.go:334] "Generic (PLEG): container finished" podID="0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" containerID="b745a156fbc6043cb3fff1befeb5f77027cc09eb39a8174703d002747504a7a7" exitCode=0 Feb 04 08:44:56 crc kubenswrapper[4644]: I0204 08:44:56.940801 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2qxgg" event={"ID":"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca","Type":"ContainerDied","Data":"b745a156fbc6043cb3fff1befeb5f77027cc09eb39a8174703d002747504a7a7"} Feb 04 08:44:57 crc kubenswrapper[4644]: I0204 08:44:57.948979 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2qxgg" event={"ID":"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca","Type":"ContainerStarted","Data":"04228c7f7db7ce5dadb502f5fab0705a29ca829d03e8ae2a4266a8f20ac44f5a"} Feb 04 08:44:57 crc kubenswrapper[4644]: I0204 08:44:57.969856 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2qxgg" podStartSLOduration=2.600460559 podStartE2EDuration="54.969831896s" podCreationTimestamp="2026-02-04 08:44:03 +0000 UTC" firstStartedPulling="2026-02-04 08:44:05.010754317 +0000 UTC m=+155.050812072" lastFinishedPulling="2026-02-04 08:44:57.380125644 +0000 UTC m=+207.420183409" observedRunningTime="2026-02-04 08:44:57.967067011 +0000 UTC m=+208.007124766" watchObservedRunningTime="2026-02-04 08:44:57.969831896 +0000 UTC m=+208.009889661" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.139490 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9"] Feb 04 08:45:00 crc kubenswrapper[4644]: E0204 08:45:00.140724 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5bd17c-911e-4bae-9326-5d18f8e928ca" containerName="pruner" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.140992 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5bd17c-911e-4bae-9326-5d18f8e928ca" containerName="pruner" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.141125 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5bd17c-911e-4bae-9326-5d18f8e928ca" containerName="pruner" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.141651 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.143287 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.143544 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.152049 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9"] Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.305440 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d284\" (UniqueName: \"kubernetes.io/projected/b231d551-35a7-406f-b661-914bad0ecec5-kube-api-access-6d284\") pod \"collect-profiles-29503245-l89h9\" (UID: \"b231d551-35a7-406f-b661-914bad0ecec5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.305534 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b231d551-35a7-406f-b661-914bad0ecec5-secret-volume\") pod \"collect-profiles-29503245-l89h9\" (UID: \"b231d551-35a7-406f-b661-914bad0ecec5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.305589 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b231d551-35a7-406f-b661-914bad0ecec5-config-volume\") pod \"collect-profiles-29503245-l89h9\" (UID: \"b231d551-35a7-406f-b661-914bad0ecec5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.407212 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b231d551-35a7-406f-b661-914bad0ecec5-secret-volume\") pod \"collect-profiles-29503245-l89h9\" (UID: \"b231d551-35a7-406f-b661-914bad0ecec5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.407349 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b231d551-35a7-406f-b661-914bad0ecec5-config-volume\") pod \"collect-profiles-29503245-l89h9\" (UID: \"b231d551-35a7-406f-b661-914bad0ecec5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.407478 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d284\" (UniqueName: \"kubernetes.io/projected/b231d551-35a7-406f-b661-914bad0ecec5-kube-api-access-6d284\") pod \"collect-profiles-29503245-l89h9\" (UID: \"b231d551-35a7-406f-b661-914bad0ecec5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.408348 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b231d551-35a7-406f-b661-914bad0ecec5-config-volume\") pod 
\"collect-profiles-29503245-l89h9\" (UID: \"b231d551-35a7-406f-b661-914bad0ecec5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.422237 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d284\" (UniqueName: \"kubernetes.io/projected/b231d551-35a7-406f-b661-914bad0ecec5-kube-api-access-6d284\") pod \"collect-profiles-29503245-l89h9\" (UID: \"b231d551-35a7-406f-b661-914bad0ecec5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.427022 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b231d551-35a7-406f-b661-914bad0ecec5-secret-volume\") pod \"collect-profiles-29503245-l89h9\" (UID: \"b231d551-35a7-406f-b661-914bad0ecec5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" Feb 04 08:45:00 crc kubenswrapper[4644]: I0204 08:45:00.458409 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" Feb 04 08:45:01 crc kubenswrapper[4644]: I0204 08:45:01.132402 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9"] Feb 04 08:45:01 crc kubenswrapper[4644]: I0204 08:45:01.976200 4644 generic.go:334] "Generic (PLEG): container finished" podID="b231d551-35a7-406f-b661-914bad0ecec5" containerID="32a3e9e2294c1c0230ce715cd5aee2238ccf3367f57b06e4cba669b1652df0cd" exitCode=0 Feb 04 08:45:01 crc kubenswrapper[4644]: I0204 08:45:01.976274 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" event={"ID":"b231d551-35a7-406f-b661-914bad0ecec5","Type":"ContainerDied","Data":"32a3e9e2294c1c0230ce715cd5aee2238ccf3367f57b06e4cba669b1652df0cd"} Feb 04 08:45:01 crc kubenswrapper[4644]: I0204 08:45:01.976525 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" event={"ID":"b231d551-35a7-406f-b661-914bad0ecec5","Type":"ContainerStarted","Data":"39a8d4362ba17ddee9a6c5075cbb8e62621f112c59378cd0667e510177815d7f"} Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.219146 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.347376 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b231d551-35a7-406f-b661-914bad0ecec5-config-volume\") pod \"b231d551-35a7-406f-b661-914bad0ecec5\" (UID: \"b231d551-35a7-406f-b661-914bad0ecec5\") " Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.347479 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b231d551-35a7-406f-b661-914bad0ecec5-secret-volume\") pod \"b231d551-35a7-406f-b661-914bad0ecec5\" (UID: \"b231d551-35a7-406f-b661-914bad0ecec5\") " Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.347601 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d284\" (UniqueName: \"kubernetes.io/projected/b231d551-35a7-406f-b661-914bad0ecec5-kube-api-access-6d284\") pod \"b231d551-35a7-406f-b661-914bad0ecec5\" (UID: \"b231d551-35a7-406f-b661-914bad0ecec5\") " Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.348194 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b231d551-35a7-406f-b661-914bad0ecec5-config-volume" (OuterVolumeSpecName: "config-volume") pod "b231d551-35a7-406f-b661-914bad0ecec5" (UID: "b231d551-35a7-406f-b661-914bad0ecec5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.353531 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b231d551-35a7-406f-b661-914bad0ecec5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b231d551-35a7-406f-b661-914bad0ecec5" (UID: "b231d551-35a7-406f-b661-914bad0ecec5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.353551 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b231d551-35a7-406f-b661-914bad0ecec5-kube-api-access-6d284" (OuterVolumeSpecName: "kube-api-access-6d284") pod "b231d551-35a7-406f-b661-914bad0ecec5" (UID: "b231d551-35a7-406f-b661-914bad0ecec5"). InnerVolumeSpecName "kube-api-access-6d284". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.449464 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d284\" (UniqueName: \"kubernetes.io/projected/b231d551-35a7-406f-b661-914bad0ecec5-kube-api-access-6d284\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.449503 4644 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b231d551-35a7-406f-b661-914bad0ecec5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.449512 4644 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b231d551-35a7-406f-b661-914bad0ecec5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.751364 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2qxgg" Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.751422 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2qxgg" Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.881908 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2qxgg" Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.988776 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" event={"ID":"b231d551-35a7-406f-b661-914bad0ecec5","Type":"ContainerDied","Data":"39a8d4362ba17ddee9a6c5075cbb8e62621f112c59378cd0667e510177815d7f"} Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.988822 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39a8d4362ba17ddee9a6c5075cbb8e62621f112c59378cd0667e510177815d7f" Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.989657 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9" Feb 04 08:45:03 crc kubenswrapper[4644]: I0204 08:45:03.992289 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf6hm" event={"ID":"811802ba-9b9d-4e09-ac7d-7a62eecabb15","Type":"ContainerStarted","Data":"4ed61e9d7d02ed5864450dce1dc8c101a53e53f57663636a91bfe0e8b4b8fa45"} Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.044877 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2qxgg" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.306105 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" podUID="469892a0-464b-45d5-8152-53498212b9ac" containerName="oauth-openshift" containerID="cri-o://2b14848c8b0087b18b5fe37fc5b76e88fc17ba01567dcb106cfb5c8c197340f5" gracePeriod=15 Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.675725 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.768943 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-ocp-branding-template\") pod \"469892a0-464b-45d5-8152-53498212b9ac\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.769005 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-idp-0-file-data\") pod \"469892a0-464b-45d5-8152-53498212b9ac\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.769041 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-error\") pod \"469892a0-464b-45d5-8152-53498212b9ac\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.769074 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-session\") pod \"469892a0-464b-45d5-8152-53498212b9ac\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.769113 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-serving-cert\") pod \"469892a0-464b-45d5-8152-53498212b9ac\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.769225 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-provider-selection\") pod \"469892a0-464b-45d5-8152-53498212b9ac\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.769252 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/469892a0-464b-45d5-8152-53498212b9ac-audit-dir\") pod \"469892a0-464b-45d5-8152-53498212b9ac\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.769316 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-cliconfig\") pod \"469892a0-464b-45d5-8152-53498212b9ac\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.769354 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-audit-policies\") pod \"469892a0-464b-45d5-8152-53498212b9ac\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " Feb 04 08:45:04 crc kubenswrapper[4644]: 
I0204 08:45:04.769387 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-trusted-ca-bundle\") pod \"469892a0-464b-45d5-8152-53498212b9ac\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.769441 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-login\") pod \"469892a0-464b-45d5-8152-53498212b9ac\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.769478 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5b2k\" (UniqueName: \"kubernetes.io/projected/469892a0-464b-45d5-8152-53498212b9ac-kube-api-access-p5b2k\") pod \"469892a0-464b-45d5-8152-53498212b9ac\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.769515 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-service-ca\") pod \"469892a0-464b-45d5-8152-53498212b9ac\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.769551 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-router-certs\") pod \"469892a0-464b-45d5-8152-53498212b9ac\" (UID: \"469892a0-464b-45d5-8152-53498212b9ac\") " Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.770374 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "469892a0-464b-45d5-8152-53498212b9ac" (UID: "469892a0-464b-45d5-8152-53498212b9ac"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.770796 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "469892a0-464b-45d5-8152-53498212b9ac" (UID: "469892a0-464b-45d5-8152-53498212b9ac"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.771110 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "469892a0-464b-45d5-8152-53498212b9ac" (UID: "469892a0-464b-45d5-8152-53498212b9ac"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.771437 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "469892a0-464b-45d5-8152-53498212b9ac" (UID: "469892a0-464b-45d5-8152-53498212b9ac"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.772120 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/469892a0-464b-45d5-8152-53498212b9ac-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "469892a0-464b-45d5-8152-53498212b9ac" (UID: "469892a0-464b-45d5-8152-53498212b9ac"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.777087 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "469892a0-464b-45d5-8152-53498212b9ac" (UID: "469892a0-464b-45d5-8152-53498212b9ac"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.777722 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "469892a0-464b-45d5-8152-53498212b9ac" (UID: "469892a0-464b-45d5-8152-53498212b9ac"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.786629 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/469892a0-464b-45d5-8152-53498212b9ac-kube-api-access-p5b2k" (OuterVolumeSpecName: "kube-api-access-p5b2k") pod "469892a0-464b-45d5-8152-53498212b9ac" (UID: "469892a0-464b-45d5-8152-53498212b9ac"). InnerVolumeSpecName "kube-api-access-p5b2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.787448 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "469892a0-464b-45d5-8152-53498212b9ac" (UID: "469892a0-464b-45d5-8152-53498212b9ac"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.787822 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "469892a0-464b-45d5-8152-53498212b9ac" (UID: "469892a0-464b-45d5-8152-53498212b9ac"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.789710 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "469892a0-464b-45d5-8152-53498212b9ac" (UID: "469892a0-464b-45d5-8152-53498212b9ac"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.791254 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "469892a0-464b-45d5-8152-53498212b9ac" (UID: "469892a0-464b-45d5-8152-53498212b9ac"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.791539 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "469892a0-464b-45d5-8152-53498212b9ac" (UID: "469892a0-464b-45d5-8152-53498212b9ac"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.795168 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "469892a0-464b-45d5-8152-53498212b9ac" (UID: "469892a0-464b-45d5-8152-53498212b9ac"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.871116 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.871149 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.871161 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.871206 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.871218 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.871230 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.871240 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.871252 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.871285 4644 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/469892a0-464b-45d5-8152-53498212b9ac-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.871298 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.871308 4644 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.871319 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.871366 4644 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/469892a0-464b-45d5-8152-53498212b9ac-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:04 crc kubenswrapper[4644]: I0204 08:45:04.871378 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5b2k\" (UniqueName: \"kubernetes.io/projected/469892a0-464b-45d5-8152-53498212b9ac-kube-api-access-p5b2k\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.000537 4644 generic.go:334] "Generic (PLEG): container finished" podID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" containerID="497903666e07ca6a5db97517474fa891b04cbf658d8ba3bf3447c384a57b28b7" exitCode=0 Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.000619 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7zgx" event={"ID":"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd","Type":"ContainerDied","Data":"497903666e07ca6a5db97517474fa891b04cbf658d8ba3bf3447c384a57b28b7"} Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.001795 4644 generic.go:334] "Generic (PLEG): container finished" podID="469892a0-464b-45d5-8152-53498212b9ac" containerID="2b14848c8b0087b18b5fe37fc5b76e88fc17ba01567dcb106cfb5c8c197340f5" exitCode=0 Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.001841 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.001876 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" event={"ID":"469892a0-464b-45d5-8152-53498212b9ac","Type":"ContainerDied","Data":"2b14848c8b0087b18b5fe37fc5b76e88fc17ba01567dcb106cfb5c8c197340f5"} Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.001909 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fxwlv" event={"ID":"469892a0-464b-45d5-8152-53498212b9ac","Type":"ContainerDied","Data":"4762bf36103fb52fb6ef08bfd6adfa95413780f9ffc8ef13aa3ad4fbc28242ef"} Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.001953 4644 scope.go:117] "RemoveContainer" containerID="2b14848c8b0087b18b5fe37fc5b76e88fc17ba01567dcb106cfb5c8c197340f5" Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.005418 4644 generic.go:334] "Generic (PLEG): container finished" podID="811802ba-9b9d-4e09-ac7d-7a62eecabb15" containerID="4ed61e9d7d02ed5864450dce1dc8c101a53e53f57663636a91bfe0e8b4b8fa45" exitCode=0 Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.006200 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf6hm" event={"ID":"811802ba-9b9d-4e09-ac7d-7a62eecabb15","Type":"ContainerDied","Data":"4ed61e9d7d02ed5864450dce1dc8c101a53e53f57663636a91bfe0e8b4b8fa45"} Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.020815 4644 scope.go:117] "RemoveContainer" containerID="2b14848c8b0087b18b5fe37fc5b76e88fc17ba01567dcb106cfb5c8c197340f5" Feb 04 08:45:05 crc kubenswrapper[4644]: E0204 08:45:05.022376 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2b14848c8b0087b18b5fe37fc5b76e88fc17ba01567dcb106cfb5c8c197340f5\": container with ID starting with 2b14848c8b0087b18b5fe37fc5b76e88fc17ba01567dcb106cfb5c8c197340f5 not found: ID does not exist" containerID="2b14848c8b0087b18b5fe37fc5b76e88fc17ba01567dcb106cfb5c8c197340f5" Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.022598 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b14848c8b0087b18b5fe37fc5b76e88fc17ba01567dcb106cfb5c8c197340f5"} err="failed to get container status \"2b14848c8b0087b18b5fe37fc5b76e88fc17ba01567dcb106cfb5c8c197340f5\": rpc error: code = NotFound desc = could not find container \"2b14848c8b0087b18b5fe37fc5b76e88fc17ba01567dcb106cfb5c8c197340f5\": container with ID starting with 2b14848c8b0087b18b5fe37fc5b76e88fc17ba01567dcb106cfb5c8c197340f5 not found: ID does not exist" Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.061993 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fxwlv"] Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.066993 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fxwlv"] Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.555275 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.555392 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.555487 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.556438 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 08:45:05 crc kubenswrapper[4644]: I0204 08:45:05.556526 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e" gracePeriod=600 Feb 04 08:45:06 crc kubenswrapper[4644]: I0204 08:45:06.014872 4644 generic.go:334] "Generic (PLEG): container finished" podID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" containerID="fd1a854ffc0f64b72fd9836aaa825e196d38f7f79dbfe82aae03a20bb8f16284" exitCode=0 Feb 04 08:45:06 crc kubenswrapper[4644]: I0204 08:45:06.014940 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8qh5" 
event={"ID":"b95f491a-610d-44ed-ae19-9e5b7ac25f52","Type":"ContainerDied","Data":"fd1a854ffc0f64b72fd9836aaa825e196d38f7f79dbfe82aae03a20bb8f16284"} Feb 04 08:45:06 crc kubenswrapper[4644]: I0204 08:45:06.018795 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e" exitCode=0 Feb 04 08:45:06 crc kubenswrapper[4644]: I0204 08:45:06.018840 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e"} Feb 04 08:45:06 crc kubenswrapper[4644]: I0204 08:45:06.293280 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2qxgg"] Feb 04 08:45:06 crc kubenswrapper[4644]: I0204 08:45:06.293568 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2qxgg" podUID="0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" containerName="registry-server" containerID="cri-o://04228c7f7db7ce5dadb502f5fab0705a29ca829d03e8ae2a4266a8f20ac44f5a" gracePeriod=2 Feb 04 08:45:06 crc kubenswrapper[4644]: I0204 08:45:06.669024 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="469892a0-464b-45d5-8152-53498212b9ac" path="/var/lib/kubelet/pods/469892a0-464b-45d5-8152-53498212b9ac/volumes" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.029078 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"340cce72b31584dc37ceeab20e931c4f33579b5072191264e84790d9a3fed77a"} Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.794988 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-86d85988f6-8lq9c"] Feb 04 08:45:07 crc kubenswrapper[4644]: E0204 08:45:07.795748 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469892a0-464b-45d5-8152-53498212b9ac" containerName="oauth-openshift" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.795767 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="469892a0-464b-45d5-8152-53498212b9ac" containerName="oauth-openshift" Feb 04 08:45:07 crc kubenswrapper[4644]: E0204 08:45:07.795794 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b231d551-35a7-406f-b661-914bad0ecec5" containerName="collect-profiles" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.795802 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b231d551-35a7-406f-b661-914bad0ecec5" containerName="collect-profiles" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.796038 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="b231d551-35a7-406f-b661-914bad0ecec5" containerName="collect-profiles" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.796079 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="469892a0-464b-45d5-8152-53498212b9ac" containerName="oauth-openshift" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.796610 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.811546 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.812648 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.814245 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.818228 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.819858 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.826094 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.826412 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.826509 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.830258 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.834445 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.835545 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.854235 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-86d85988f6-8lq9c"] Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.858776 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.859751 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.868537 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.874399 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.877932 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2qxgg"
Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.949958 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-user-template-login\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.950207 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qskrq\" (UniqueName: \"kubernetes.io/projected/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-kube-api-access-qskrq\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.950230 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.950248 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-audit-policies\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.950275 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-user-template-error\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.950292 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-service-ca\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.950462 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.950505 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.950530 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.950552 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.950580 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.950605 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-router-certs\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.950623 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-session\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:07 crc kubenswrapper[4644]: I0204 08:45:07.950639 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-audit-dir\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.040504 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7zgx" event={"ID":"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd","Type":"ContainerStarted","Data":"1ec83c5682ac8a364107cc4e99993b4106c0dcc01c2c8d67159101b76d369698"}
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.051878 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-catalog-content\") pod \"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca\" (UID: \"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca\") "
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052222 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-utilities\") pod \"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca\" (UID: \"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca\") "
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052276 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6756\" (UniqueName: \"kubernetes.io/projected/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-kube-api-access-g6756\") pod \"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca\" (UID: \"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca\") "
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052372 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052415 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052443 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052486 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052517 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-router-certs\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052540 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-session\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052562 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-audit-dir\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052592 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-user-template-login\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052615 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qskrq\" (UniqueName: \"kubernetes.io/projected/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-kube-api-access-qskrq\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052637 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052659 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-audit-policies\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052691 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-user-template-error\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052714 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-service-ca\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.052736 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.053495 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.054341 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-audit-dir\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.055027 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-utilities" (OuterVolumeSpecName: "utilities") pod "0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" (UID: "0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.063200 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.065062 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.072110 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-audit-policies\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.074717 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-kube-api-access-g6756" (OuterVolumeSpecName: "kube-api-access-g6756") pod "0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" (UID: "0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca"). InnerVolumeSpecName "kube-api-access-g6756". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.076593 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-user-template-error\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.077138 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-service-ca\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.081845 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-session\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.081895 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-router-certs\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.085051 4644 generic.go:334] "Generic (PLEG): container finished" podID="0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" containerID="04228c7f7db7ce5dadb502f5fab0705a29ca829d03e8ae2a4266a8f20ac44f5a" exitCode=0
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.085135 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2qxgg" event={"ID":"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca","Type":"ContainerDied","Data":"04228c7f7db7ce5dadb502f5fab0705a29ca829d03e8ae2a4266a8f20ac44f5a"}
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.085167 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2qxgg" event={"ID":"0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca","Type":"ContainerDied","Data":"e2220f79c4ad20f9823b88656704150871946c93f61d8394e019caeb7ffa78cf"}
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.085187 4644 scope.go:117] "RemoveContainer" containerID="04228c7f7db7ce5dadb502f5fab0705a29ca829d03e8ae2a4266a8f20ac44f5a"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.085463 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2qxgg"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.085746 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-user-template-login\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.086046 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.087065 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.089788 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qskrq\" (UniqueName: \"kubernetes.io/projected/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-kube-api-access-qskrq\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.093094 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dbf5c002-73f4-4e70-bb65-a9b2a7b32609-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86d85988f6-8lq9c\" (UID: \"dbf5c002-73f4-4e70-bb65-a9b2a7b32609\") " pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.097609 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g7zgx" podStartSLOduration=4.659297498 podStartE2EDuration="1m6.097581735s" podCreationTimestamp="2026-02-04 08:44:02 +0000 UTC" firstStartedPulling="2026-02-04 08:44:04.99744664 +0000 UTC m=+155.037504385" lastFinishedPulling="2026-02-04 08:45:06.435730867 +0000 UTC m=+216.475788622" observedRunningTime="2026-02-04 08:45:08.077487389 +0000 UTC m=+218.117545144" watchObservedRunningTime="2026-02-04 08:45:08.097581735 +0000 UTC m=+218.137639490"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.116558 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf6hm" event={"ID":"811802ba-9b9d-4e09-ac7d-7a62eecabb15","Type":"ContainerStarted","Data":"e270362387861a755652f40a597747e3a8eae97058c060413e73db904e8d5ab9"}
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.117812 4644 scope.go:117] "RemoveContainer" containerID="b745a156fbc6043cb3fff1befeb5f77027cc09eb39a8174703d002747504a7a7"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.136588 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qf6hm" podStartSLOduration=4.645574481 podStartE2EDuration="1m2.136549723s" podCreationTimestamp="2026-02-04 08:44:06 +0000 UTC" firstStartedPulling="2026-02-04 08:44:08.232399893 +0000 UTC m=+158.272457648" lastFinishedPulling="2026-02-04 08:45:05.723375125 +0000 UTC m=+215.763432890" observedRunningTime="2026-02-04 08:45:08.134575299 +0000 UTC m=+218.174633054" watchObservedRunningTime="2026-02-04 08:45:08.136549723 +0000 UTC m=+218.176607478"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.143899 4644 generic.go:334] "Generic (PLEG): container finished" podID="e6faaab9-25f9-47fc-a713-f4b6bde29cbc" containerID="ce6b4a28555a3dcea01dae17074d8e977b6e5beb35b380e47c1e57ecbc24a775" exitCode=0
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.144691 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s99cb" event={"ID":"e6faaab9-25f9-47fc-a713-f4b6bde29cbc","Type":"ContainerDied","Data":"ce6b4a28555a3dcea01dae17074d8e977b6e5beb35b380e47c1e57ecbc24a775"}
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.152465 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" (UID: "0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.154114 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6756\" (UniqueName: \"kubernetes.io/projected/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-kube-api-access-g6756\") on node \"crc\" DevicePath \"\""
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.154133 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.154143 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.169511 4644 scope.go:117] "RemoveContainer" containerID="2c5a4f272e1793d3f42870cf4d4ed94609f60e16c256d926a3991e3734e7f9a6"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.187319 4644 scope.go:117] "RemoveContainer" containerID="04228c7f7db7ce5dadb502f5fab0705a29ca829d03e8ae2a4266a8f20ac44f5a"
Feb 04 08:45:08 crc kubenswrapper[4644]: E0204 08:45:08.187780 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04228c7f7db7ce5dadb502f5fab0705a29ca829d03e8ae2a4266a8f20ac44f5a\": container with ID starting with 04228c7f7db7ce5dadb502f5fab0705a29ca829d03e8ae2a4266a8f20ac44f5a not found: ID does not exist" containerID="04228c7f7db7ce5dadb502f5fab0705a29ca829d03e8ae2a4266a8f20ac44f5a"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.187825 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04228c7f7db7ce5dadb502f5fab0705a29ca829d03e8ae2a4266a8f20ac44f5a"} err="failed to get container status \"04228c7f7db7ce5dadb502f5fab0705a29ca829d03e8ae2a4266a8f20ac44f5a\": rpc error: code = NotFound desc = could not find container \"04228c7f7db7ce5dadb502f5fab0705a29ca829d03e8ae2a4266a8f20ac44f5a\": container with ID starting with 04228c7f7db7ce5dadb502f5fab0705a29ca829d03e8ae2a4266a8f20ac44f5a not found: ID does not exist"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.187858 4644 scope.go:117] "RemoveContainer" containerID="b745a156fbc6043cb3fff1befeb5f77027cc09eb39a8174703d002747504a7a7"
Feb 04 08:45:08 crc kubenswrapper[4644]: E0204 08:45:08.188475 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b745a156fbc6043cb3fff1befeb5f77027cc09eb39a8174703d002747504a7a7\": container with ID starting with b745a156fbc6043cb3fff1befeb5f77027cc09eb39a8174703d002747504a7a7 not found: ID does not exist" containerID="b745a156fbc6043cb3fff1befeb5f77027cc09eb39a8174703d002747504a7a7"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.188497 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b745a156fbc6043cb3fff1befeb5f77027cc09eb39a8174703d002747504a7a7"} err="failed to get container status \"b745a156fbc6043cb3fff1befeb5f77027cc09eb39a8174703d002747504a7a7\": rpc error: code = NotFound desc = could not find container \"b745a156fbc6043cb3fff1befeb5f77027cc09eb39a8174703d002747504a7a7\": container with ID starting with b745a156fbc6043cb3fff1befeb5f77027cc09eb39a8174703d002747504a7a7 not found: ID does not exist"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.188510 4644 scope.go:117] "RemoveContainer" containerID="2c5a4f272e1793d3f42870cf4d4ed94609f60e16c256d926a3991e3734e7f9a6"
Feb 04 08:45:08 crc kubenswrapper[4644]: E0204 08:45:08.188808 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c5a4f272e1793d3f42870cf4d4ed94609f60e16c256d926a3991e3734e7f9a6\": container with ID starting with 2c5a4f272e1793d3f42870cf4d4ed94609f60e16c256d926a3991e3734e7f9a6 not found: ID does not exist" containerID="2c5a4f272e1793d3f42870cf4d4ed94609f60e16c256d926a3991e3734e7f9a6"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.188827 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5a4f272e1793d3f42870cf4d4ed94609f60e16c256d926a3991e3734e7f9a6"} err="failed to get container status \"2c5a4f272e1793d3f42870cf4d4ed94609f60e16c256d926a3991e3734e7f9a6\": rpc error: code = NotFound desc = could not find container \"2c5a4f272e1793d3f42870cf4d4ed94609f60e16c256d926a3991e3734e7f9a6\": container with ID starting with 2c5a4f272e1793d3f42870cf4d4ed94609f60e16c256d926a3991e3734e7f9a6 not found: ID does not exist"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.232621 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.424560 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2qxgg"]
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.430388 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2qxgg"]
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.548104 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-86d85988f6-8lq9c"]
Feb 04 08:45:08 crc kubenswrapper[4644]: W0204 08:45:08.562209 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf5c002_73f4_4e70_bb65_a9b2a7b32609.slice/crio-cb5892dbcb615337d3141d1ffe235d7def0daa627d9ed64118bda6ec6708ba8d WatchSource:0}: Error finding container cb5892dbcb615337d3141d1ffe235d7def0daa627d9ed64118bda6ec6708ba8d: Status 404 returned error can't find the container with id cb5892dbcb615337d3141d1ffe235d7def0daa627d9ed64118bda6ec6708ba8d
Feb 04 08:45:08 crc kubenswrapper[4644]: I0204 08:45:08.671596 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" path="/var/lib/kubelet/pods/0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca/volumes"
Feb 04 08:45:09 crc kubenswrapper[4644]: I0204 08:45:09.152257 4644 generic.go:334] "Generic (PLEG): container finished" podID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" containerID="44e4e6c3e6c2f6ef52783befef4943dd4caa3523675ee9484612c10dd716fdb3" exitCode=0
Feb 04 08:45:09 crc kubenswrapper[4644]: I0204 08:45:09.152371 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rpn5" event={"ID":"8c424c9d-56cc-42b6-95b2-c23ff3ed8846","Type":"ContainerDied","Data":"44e4e6c3e6c2f6ef52783befef4943dd4caa3523675ee9484612c10dd716fdb3"}
Feb 04 08:45:09 crc kubenswrapper[4644]: I0204 08:45:09.158630 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8qh5" event={"ID":"b95f491a-610d-44ed-ae19-9e5b7ac25f52","Type":"ContainerStarted","Data":"6fae6705cc055233d73367795c23d4dac1726e36be52a1f7ab506b4c9a0e9754"}
Feb 04 08:45:09 crc kubenswrapper[4644]: I0204 08:45:09.162207 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c" event={"ID":"dbf5c002-73f4-4e70-bb65-a9b2a7b32609","Type":"ContainerStarted","Data":"be06d4588e17bc0af4c76dce8aafe1796297116c416411c07e751c836864f836"}
Feb 04 08:45:09 crc kubenswrapper[4644]: I0204 08:45:09.162247 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c" event={"ID":"dbf5c002-73f4-4e70-bb65-a9b2a7b32609","Type":"ContainerStarted","Data":"cb5892dbcb615337d3141d1ffe235d7def0daa627d9ed64118bda6ec6708ba8d"}
Feb 04 08:45:09 crc kubenswrapper[4644]: I0204 08:45:09.163049 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:09 crc kubenswrapper[4644]: I0204 08:45:09.167745 4644 generic.go:334] "Generic (PLEG): container finished" podID="55e01c73-5587-45e5-9a8f-47fedc43d340" containerID="d94d7ff194b014d97b968aa63f89c9b91a3cc8ed7b160c9144395d8e31f8c9d4" exitCode=0
Feb 04 08:45:09 crc kubenswrapper[4644]: I0204 08:45:09.167818 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5qf8" event={"ID":"55e01c73-5587-45e5-9a8f-47fedc43d340","Type":"ContainerDied","Data":"d94d7ff194b014d97b968aa63f89c9b91a3cc8ed7b160c9144395d8e31f8c9d4"}
Feb 04 08:45:09 crc kubenswrapper[4644]: I0204 08:45:09.173407 4644 generic.go:334] "Generic (PLEG): container finished" podID="b0dd29e1-253e-48f5-9e1f-df4c9493fcbc" containerID="4143c3ba762cb2f6dc15f2881775afc527627aa1268fdf18342fced9094b36d8" exitCode=0
Feb 04 08:45:09 crc kubenswrapper[4644]: I0204 08:45:09.173441 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sltvp" event={"ID":"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc","Type":"ContainerDied","Data":"4143c3ba762cb2f6dc15f2881775afc527627aa1268fdf18342fced9094b36d8"}
Feb 04 08:45:09 crc kubenswrapper[4644]: I0204 08:45:09.200426 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d8qh5" podStartSLOduration=4.5438700910000005 podStartE2EDuration="1m7.200401972s" podCreationTimestamp="2026-02-04 08:44:02 +0000 UTC" firstStartedPulling="2026-02-04 08:44:05.018384334 +0000 UTC m=+155.058442089" lastFinishedPulling="2026-02-04 08:45:07.674916215 +0000 UTC m=+217.714973970" observedRunningTime="2026-02-04 08:45:09.196404124 +0000 UTC m=+219.236461889" watchObservedRunningTime="2026-02-04 08:45:09.200401972 +0000 UTC m=+219.240459727"
Feb 04 08:45:09 crc kubenswrapper[4644]: I0204 08:45:09.249007 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c" podStartSLOduration=30.248990691 podStartE2EDuration="30.248990691s" podCreationTimestamp="2026-02-04 08:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:45:09.244631242 +0000 UTC m=+219.284689017" watchObservedRunningTime="2026-02-04 08:45:09.248990691 +0000 UTC m=+219.289048446"
Feb 04 08:45:09 crc kubenswrapper[4644]: I0204 08:45:09.803377 4644 patch_prober.go:28] interesting pod/oauth-openshift-86d85988f6-8lq9c container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 04 08:45:09 crc kubenswrapper[4644]: [+]log ok
Feb 04 08:45:09 crc kubenswrapper[4644]: [+]poststarthook/max-in-flight-filter ok
Feb 04 08:45:09 crc kubenswrapper[4644]: [-]poststarthook/storage-object-count-tracker-hook failed: reason withheld
Feb 04 08:45:09 crc kubenswrapper[4644]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 04 08:45:09 crc kubenswrapper[4644]: healthz check failed
Feb 04 08:45:09 crc kubenswrapper[4644]: I0204 08:45:09.803452 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c" podUID="dbf5c002-73f4-4e70-bb65-a9b2a7b32609" containerName="oauth-openshift" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 04 08:45:10 crc kubenswrapper[4644]: I0204 08:45:10.187971 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s99cb" event={"ID":"e6faaab9-25f9-47fc-a713-f4b6bde29cbc","Type":"ContainerStarted","Data":"c95600da07fbb7f0022af825e023b3cb7615ea044ed4a76045174350adee7f26"}
Feb 04 08:45:10 crc kubenswrapper[4644]: I0204 08:45:10.194627 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-86d85988f6-8lq9c"
Feb 04 08:45:11 crc kubenswrapper[4644]: I0204 08:45:11.221297 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s99cb" podStartSLOduration=3.439317886 podStartE2EDuration="1m8.221276803s" podCreationTimestamp="2026-02-04 08:44:03 +0000 UTC" firstStartedPulling="2026-02-04 08:44:04.920846399 +0000 UTC m=+154.960904154" lastFinishedPulling="2026-02-04 08:45:09.702805316 +0000 UTC m=+219.742863071" observedRunningTime="2026-02-04 08:45:11.218293422 +0000 UTC m=+221.258351177" watchObservedRunningTime="2026-02-04 08:45:11.221276803 +0000 UTC m=+221.261334568"
Feb 04 08:45:12 crc kubenswrapper[4644]: I0204 08:45:12.200267 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sltvp" event={"ID":"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc","Type":"ContainerStarted","Data":"e5a0811975c883b6e650f10fc037ea49beb9a4c9ad4f0d2ced74383cf0e59c06"}
Feb 04 08:45:13 crc kubenswrapper[4644]: I0204 08:45:13.053697 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g7zgx"
Feb 04 08:45:13 crc kubenswrapper[4644]: I0204 08:45:13.057140 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g7zgx"
Feb 04 08:45:13 crc kubenswrapper[4644]: I0204 08:45:13.092458 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g7zgx"
Feb 04 08:45:13 crc kubenswrapper[4644]: I0204 08:45:13.222954 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sltvp" podStartSLOduration=4.14546287 podStartE2EDuration="1m8.222921851s" podCreationTimestamp="2026-02-04 08:44:05 +0000 UTC" firstStartedPulling="2026-02-04 08:44:07.191950118 +0000 UTC m=+157.232007873" lastFinishedPulling="2026-02-04 08:45:11.269409089 +0000 UTC m=+221.309466854" observedRunningTime="2026-02-04 08:45:13.221529244 +0000 UTC m=+223.261587019" watchObservedRunningTime="2026-02-04 08:45:13.222921851 +0000 UTC m=+223.262979606"
Feb 04 08:45:13 crc kubenswrapper[4644]: I0204 08:45:13.252076 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g7zgx"
Feb 04 08:45:13 crc kubenswrapper[4644]: I0204 08:45:13.271708 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d8qh5"
Feb 04 08:45:13 crc kubenswrapper[4644]: I0204 08:45:13.271753 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d8qh5"
Feb 04 08:45:13 crc kubenswrapper[4644]: I0204 08:45:13.387533 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d8qh5"
Feb 04 08:45:13 crc kubenswrapper[4644]: I0204 08:45:13.475252 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s99cb"
Feb 04 08:45:13 crc kubenswrapper[4644]: I0204 08:45:13.475313 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s99cb"
Feb 04 08:45:13 crc kubenswrapper[4644]: I0204 08:45:13.539028 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s99cb"
Feb 04 08:45:14 crc kubenswrapper[4644]: I0204 08:45:14.215437 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5qf8" event={"ID":"55e01c73-5587-45e5-9a8f-47fedc43d340","Type":"ContainerStarted","Data":"535f50f0c7df3a1a31f2ddab8034dd702c78096816fe6a34529dfb2b6090f94f"}
Feb 04 08:45:14 crc kubenswrapper[4644]: I0204 08:45:14.276024 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s99cb"
Feb 04 08:45:14 crc kubenswrapper[4644]: I0204 08:45:14.286601 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d8qh5"
Feb 04 08:45:15 crc kubenswrapper[4644]: I0204 08:45:15.243845 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r5qf8" podStartSLOduration=4.693537207 podStartE2EDuration="1m11.243819103s" podCreationTimestamp="2026-02-04 08:44:04 +0000 UTC" firstStartedPulling="2026-02-04 08:44:07.188857374 +0000 UTC m=+157.228915129" lastFinishedPulling="2026-02-04 08:45:13.73913927 +0000 UTC m=+223.779197025" observedRunningTime="2026-02-04 08:45:15.240634086 +0000 UTC m=+225.280691851" watchObservedRunningTime="2026-02-04 08:45:15.243819103 +0000 UTC m=+225.283876888"
Feb 04 08:45:15 crc kubenswrapper[4644]: I0204 08:45:15.297029 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r5qf8"
Feb 04 08:45:15 crc kubenswrapper[4644]: I0204 08:45:15.297073 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r5qf8"
Feb 04 08:45:15 crc kubenswrapper[4644]: I0204 08:45:15.658085 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sltvp"
Feb 04 08:45:15 crc kubenswrapper[4644]: I0204 08:45:15.658401 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sltvp"
Feb 04 08:45:15 crc kubenswrapper[4644]: I0204 08:45:15.696428 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s99cb"]
Feb 04 08:45:15 crc kubenswrapper[4644]: I0204 08:45:15.699423 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sltvp"
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.231555 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rpn5" event={"ID":"8c424c9d-56cc-42b6-95b2-c23ff3ed8846","Type":"ContainerStarted","Data":"dd99e74ea3b8711beddea0e6d75c386fefa83744f04081e045315a412ef61d6c"}
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.231692 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s99cb" podUID="e6faaab9-25f9-47fc-a713-f4b6bde29cbc" containerName="registry-server" containerID="cri-o://c95600da07fbb7f0022af825e023b3cb7615ea044ed4a76045174350adee7f26" gracePeriod=2
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.250624 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8rpn5"
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.252684 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8rpn5"
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.312955 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sltvp"
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.336741 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8rpn5" podStartSLOduration=3.425097021 podStartE2EDuration="1m11.336716391s" podCreationTimestamp="2026-02-04 08:44:05 +0000 UTC" firstStartedPulling="2026-02-04 08:44:07.157581214 +0000 UTC m=+157.197638969" lastFinishedPulling="2026-02-04 08:45:15.069200584 +0000 UTC m=+225.109258339" observedRunningTime="2026-02-04 08:45:16.276604039 +0000 UTC m=+226.316661834" watchObservedRunningTime="2026-02-04 08:45:16.336716391 +0000 UTC m=+226.376774166"
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.355558 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-r5qf8" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" containerName="registry-server" probeResult="failure" output=<
Feb 04 08:45:16 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s
Feb 04 08:45:16 crc kubenswrapper[4644]: >
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.554924 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s99cb"
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.580137 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-catalog-content\") pod \"e6faaab9-25f9-47fc-a713-f4b6bde29cbc\" (UID: \"e6faaab9-25f9-47fc-a713-f4b6bde29cbc\") "
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.580498 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blbj2\" (UniqueName: \"kubernetes.io/projected/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-kube-api-access-blbj2\") pod \"e6faaab9-25f9-47fc-a713-f4b6bde29cbc\" (UID: \"e6faaab9-25f9-47fc-a713-f4b6bde29cbc\") "
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.580529 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-utilities\") pod \"e6faaab9-25f9-47fc-a713-f4b6bde29cbc\" (UID: \"e6faaab9-25f9-47fc-a713-f4b6bde29cbc\") "
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.581402 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-utilities" (OuterVolumeSpecName: "utilities") pod "e6faaab9-25f9-47fc-a713-f4b6bde29cbc" (UID: "e6faaab9-25f9-47fc-a713-f4b6bde29cbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.589692 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-kube-api-access-blbj2" (OuterVolumeSpecName: "kube-api-access-blbj2") pod "e6faaab9-25f9-47fc-a713-f4b6bde29cbc" (UID: "e6faaab9-25f9-47fc-a713-f4b6bde29cbc"). InnerVolumeSpecName "kube-api-access-blbj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.632952 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6faaab9-25f9-47fc-a713-f4b6bde29cbc" (UID: "e6faaab9-25f9-47fc-a713-f4b6bde29cbc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.656461 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qf6hm"
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.658549 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qf6hm"
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.681257 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blbj2\" (UniqueName: \"kubernetes.io/projected/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-kube-api-access-blbj2\") on node \"crc\" DevicePath \"\""
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.681282 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.681294 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6faaab9-25f9-47fc-a713-f4b6bde29cbc-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 08:45:16 crc kubenswrapper[4644]: I0204 08:45:16.696212 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qf6hm"
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.243579 4644 generic.go:334] "Generic (PLEG): container finished" podID="e6faaab9-25f9-47fc-a713-f4b6bde29cbc" containerID="c95600da07fbb7f0022af825e023b3cb7615ea044ed4a76045174350adee7f26" exitCode=0
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.243702 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s99cb"
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.243759 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s99cb" event={"ID":"e6faaab9-25f9-47fc-a713-f4b6bde29cbc","Type":"ContainerDied","Data":"c95600da07fbb7f0022af825e023b3cb7615ea044ed4a76045174350adee7f26"}
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.243812 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s99cb" event={"ID":"e6faaab9-25f9-47fc-a713-f4b6bde29cbc","Type":"ContainerDied","Data":"018f1e6dd29eb4dfc9561d16fd3d8d7243672a6440e09471961a400614a2569a"}
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.243833 4644 scope.go:117] "RemoveContainer" containerID="c95600da07fbb7f0022af825e023b3cb7615ea044ed4a76045174350adee7f26"
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.282737 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s99cb"]
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.282796 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s99cb"]
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.292197 4644 scope.go:117] "RemoveContainer" containerID="ce6b4a28555a3dcea01dae17074d8e977b6e5beb35b380e47c1e57ecbc24a775"
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.312485 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qf6hm"
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.322994 4644 scope.go:117] "RemoveContainer" containerID="3fe6cdc8a27fe5b75a9358280358fb16ca3f2d605dedb9b838a82ef53efe75a8"
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.326948 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8rpn5" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" containerName="registry-server" probeResult="failure" output=<
Feb 04 08:45:17 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s
Feb 04 08:45:17 crc kubenswrapper[4644]: >
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.345704 4644 scope.go:117] "RemoveContainer" containerID="c95600da07fbb7f0022af825e023b3cb7615ea044ed4a76045174350adee7f26"
Feb 04 08:45:17 crc kubenswrapper[4644]: E0204 08:45:17.346966 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c95600da07fbb7f0022af825e023b3cb7615ea044ed4a76045174350adee7f26\": container with ID starting with c95600da07fbb7f0022af825e023b3cb7615ea044ed4a76045174350adee7f26 not found: ID does not exist" containerID="c95600da07fbb7f0022af825e023b3cb7615ea044ed4a76045174350adee7f26"
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.347021 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c95600da07fbb7f0022af825e023b3cb7615ea044ed4a76045174350adee7f26"} err="failed to get container status \"c95600da07fbb7f0022af825e023b3cb7615ea044ed4a76045174350adee7f26\": rpc error: code = NotFound desc = could not find container \"c95600da07fbb7f0022af825e023b3cb7615ea044ed4a76045174350adee7f26\": container with ID starting with c95600da07fbb7f0022af825e023b3cb7615ea044ed4a76045174350adee7f26 not found: ID does not exist"
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.347055 4644 scope.go:117] "RemoveContainer" containerID="ce6b4a28555a3dcea01dae17074d8e977b6e5beb35b380e47c1e57ecbc24a775"
Feb 04 08:45:17 crc kubenswrapper[4644]: E0204 08:45:17.347549 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce6b4a28555a3dcea01dae17074d8e977b6e5beb35b380e47c1e57ecbc24a775\": container with ID starting with ce6b4a28555a3dcea01dae17074d8e977b6e5beb35b380e47c1e57ecbc24a775 not found: ID does not exist" containerID="ce6b4a28555a3dcea01dae17074d8e977b6e5beb35b380e47c1e57ecbc24a775"
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.347584 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce6b4a28555a3dcea01dae17074d8e977b6e5beb35b380e47c1e57ecbc24a775"} err="failed to get container status \"ce6b4a28555a3dcea01dae17074d8e977b6e5beb35b380e47c1e57ecbc24a775\": rpc error: code = NotFound desc = could not find container \"ce6b4a28555a3dcea01dae17074d8e977b6e5beb35b380e47c1e57ecbc24a775\": container with ID starting with ce6b4a28555a3dcea01dae17074d8e977b6e5beb35b380e47c1e57ecbc24a775 not found: ID does not exist"
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.347613 4644 scope.go:117] "RemoveContainer" containerID="3fe6cdc8a27fe5b75a9358280358fb16ca3f2d605dedb9b838a82ef53efe75a8"
Feb 04 08:45:17 crc kubenswrapper[4644]: E0204 08:45:17.347863 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fe6cdc8a27fe5b75a9358280358fb16ca3f2d605dedb9b838a82ef53efe75a8\": container with ID starting with 3fe6cdc8a27fe5b75a9358280358fb16ca3f2d605dedb9b838a82ef53efe75a8 not found: ID does not exist" containerID="3fe6cdc8a27fe5b75a9358280358fb16ca3f2d605dedb9b838a82ef53efe75a8"
Feb 04 08:45:17 crc kubenswrapper[4644]: I0204 08:45:17.347887 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe6cdc8a27fe5b75a9358280358fb16ca3f2d605dedb9b838a82ef53efe75a8"} err="failed to get container status \"3fe6cdc8a27fe5b75a9358280358fb16ca3f2d605dedb9b838a82ef53efe75a8\": rpc error: code = NotFound desc = could not find container \"3fe6cdc8a27fe5b75a9358280358fb16ca3f2d605dedb9b838a82ef53efe75a8\": container with ID starting with 3fe6cdc8a27fe5b75a9358280358fb16ca3f2d605dedb9b838a82ef53efe75a8 not found: ID does not exist"
Feb 04 08:45:18 crc kubenswrapper[4644]: I0204 08:45:18.666478 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6faaab9-25f9-47fc-a713-f4b6bde29cbc" path="/var/lib/kubelet/pods/e6faaab9-25f9-47fc-a713-f4b6bde29cbc/volumes"
Feb 04 08:45:20 crc kubenswrapper[4644]: I0204 08:45:20.093742 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qf6hm"]
Feb 04 08:45:20 crc kubenswrapper[4644]: I0204 08:45:20.261508 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qf6hm" podUID="811802ba-9b9d-4e09-ac7d-7a62eecabb15" containerName="registry-server" containerID="cri-o://e270362387861a755652f40a597747e3a8eae97058c060413e73db904e8d5ab9" gracePeriod=2
Feb 04 08:45:20 crc kubenswrapper[4644]: I0204 08:45:20.692846 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sltvp"]
Feb 04 08:45:20 crc kubenswrapper[4644]: I0204 08:45:20.695634 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sltvp" podUID="b0dd29e1-253e-48f5-9e1f-df4c9493fcbc" containerName="registry-server" containerID="cri-o://e5a0811975c883b6e650f10fc037ea49beb9a4c9ad4f0d2ced74383cf0e59c06" gracePeriod=2
Feb 04 08:45:21 crc kubenswrapper[4644]: I0204 08:45:21.266745 4644 generic.go:334] "Generic (PLEG): container finished" podID="811802ba-9b9d-4e09-ac7d-7a62eecabb15" containerID="e270362387861a755652f40a597747e3a8eae97058c060413e73db904e8d5ab9" exitCode=0
Feb 04 08:45:21 crc kubenswrapper[4644]: I0204 08:45:21.266964 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf6hm" event={"ID":"811802ba-9b9d-4e09-ac7d-7a62eecabb15","Type":"ContainerDied","Data":"e270362387861a755652f40a597747e3a8eae97058c060413e73db904e8d5ab9"}
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.155816 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qf6hm"
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.160254 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/811802ba-9b9d-4e09-ac7d-7a62eecabb15-catalog-content\") pod \"811802ba-9b9d-4e09-ac7d-7a62eecabb15\" (UID: \"811802ba-9b9d-4e09-ac7d-7a62eecabb15\") "
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.160572 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/811802ba-9b9d-4e09-ac7d-7a62eecabb15-utilities\") pod \"811802ba-9b9d-4e09-ac7d-7a62eecabb15\" (UID: \"811802ba-9b9d-4e09-ac7d-7a62eecabb15\") "
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.160608 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7kd9\" (UniqueName: \"kubernetes.io/projected/811802ba-9b9d-4e09-ac7d-7a62eecabb15-kube-api-access-x7kd9\") pod \"811802ba-9b9d-4e09-ac7d-7a62eecabb15\" (UID: \"811802ba-9b9d-4e09-ac7d-7a62eecabb15\") "
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.162364 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/811802ba-9b9d-4e09-ac7d-7a62eecabb15-utilities" (OuterVolumeSpecName: "utilities") pod "811802ba-9b9d-4e09-ac7d-7a62eecabb15" (UID: "811802ba-9b9d-4e09-ac7d-7a62eecabb15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.177152 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/811802ba-9b9d-4e09-ac7d-7a62eecabb15-kube-api-access-x7kd9" (OuterVolumeSpecName: "kube-api-access-x7kd9") pod "811802ba-9b9d-4e09-ac7d-7a62eecabb15" (UID: "811802ba-9b9d-4e09-ac7d-7a62eecabb15"). InnerVolumeSpecName "kube-api-access-x7kd9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.262917 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/811802ba-9b9d-4e09-ac7d-7a62eecabb15-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.262951 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7kd9\" (UniqueName: \"kubernetes.io/projected/811802ba-9b9d-4e09-ac7d-7a62eecabb15-kube-api-access-x7kd9\") on node \"crc\" DevicePath \"\""
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.273856 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf6hm" event={"ID":"811802ba-9b9d-4e09-ac7d-7a62eecabb15","Type":"ContainerDied","Data":"987dc7ee01d5fa2c864af2362f7e85c11e5b8e1a1b1a657d2a2f5b45f96611bc"}
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.273888 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qf6hm"
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.273928 4644 scope.go:117] "RemoveContainer" containerID="e270362387861a755652f40a597747e3a8eae97058c060413e73db904e8d5ab9"
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.275346 4644 generic.go:334] "Generic (PLEG): container finished" podID="b0dd29e1-253e-48f5-9e1f-df4c9493fcbc" containerID="e5a0811975c883b6e650f10fc037ea49beb9a4c9ad4f0d2ced74383cf0e59c06" exitCode=0
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.275395 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sltvp" event={"ID":"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc","Type":"ContainerDied","Data":"e5a0811975c883b6e650f10fc037ea49beb9a4c9ad4f0d2ced74383cf0e59c06"}
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.275444 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sltvp" event={"ID":"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc","Type":"ContainerDied","Data":"aabe1ab29c406bb22c689a161fd37360566cbdf314f828d640f783cac05b0526"}
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.275457 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aabe1ab29c406bb22c689a161fd37360566cbdf314f828d640f783cac05b0526"
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.287875 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sltvp"
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.292494 4644 scope.go:117] "RemoveContainer" containerID="4ed61e9d7d02ed5864450dce1dc8c101a53e53f57663636a91bfe0e8b4b8fa45"
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.320416 4644 scope.go:117] "RemoveContainer" containerID="02a36b734c5b61a1e06a181a90e34731f13daeb26e52186dc4dd2a922cf3e1c5"
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.363519 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vv7c\" (UniqueName: \"kubernetes.io/projected/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-kube-api-access-5vv7c\") pod \"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc\" (UID: \"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc\") "
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.364611 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-catalog-content\") pod \"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc\" (UID: \"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc\") "
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.364765 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-utilities\") pod \"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc\" (UID: \"b0dd29e1-253e-48f5-9e1f-df4c9493fcbc\") "
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.365699 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-utilities" (OuterVolumeSpecName: "utilities") pod "b0dd29e1-253e-48f5-9e1f-df4c9493fcbc" (UID: "b0dd29e1-253e-48f5-9e1f-df4c9493fcbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.368964 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/811802ba-9b9d-4e09-ac7d-7a62eecabb15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "811802ba-9b9d-4e09-ac7d-7a62eecabb15" (UID: "811802ba-9b9d-4e09-ac7d-7a62eecabb15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.374847 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-kube-api-access-5vv7c" (OuterVolumeSpecName: "kube-api-access-5vv7c") pod "b0dd29e1-253e-48f5-9e1f-df4c9493fcbc" (UID: "b0dd29e1-253e-48f5-9e1f-df4c9493fcbc"). InnerVolumeSpecName "kube-api-access-5vv7c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.386677 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0dd29e1-253e-48f5-9e1f-df4c9493fcbc" (UID: "b0dd29e1-253e-48f5-9e1f-df4c9493fcbc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.466535 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.466573 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vv7c\" (UniqueName: \"kubernetes.io/projected/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-kube-api-access-5vv7c\") on node \"crc\" DevicePath \"\""
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.466587 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.466599 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/811802ba-9b9d-4e09-ac7d-7a62eecabb15-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.603207 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qf6hm"]
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.606335 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qf6hm"]
Feb 04 08:45:22 crc kubenswrapper[4644]: I0204 08:45:22.666911 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="811802ba-9b9d-4e09-ac7d-7a62eecabb15" path="/var/lib/kubelet/pods/811802ba-9b9d-4e09-ac7d-7a62eecabb15/volumes"
Feb 04 08:45:22 crc kubenswrapper[4644]: E0204 08:45:22.695426 4644 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod811802ba_9b9d_4e09_ac7d_7a62eecabb15.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod811802ba_9b9d_4e09_ac7d_7a62eecabb15.slice/crio-987dc7ee01d5fa2c864af2362f7e85c11e5b8e1a1b1a657d2a2f5b45f96611bc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0dd29e1_253e_48f5_9e1f_df4c9493fcbc.slice\": RecentStats: unable to find data in memory cache]"
Feb 04 08:45:23 crc kubenswrapper[4644]: I0204 08:45:23.281425 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sltvp"
Feb 04 08:45:23 crc kubenswrapper[4644]: I0204 08:45:23.300052 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sltvp"]
Feb 04 08:45:23 crc kubenswrapper[4644]: I0204 08:45:23.306459 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sltvp"]
Feb 04 08:45:24 crc kubenswrapper[4644]: I0204 08:45:24.666542 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0dd29e1-253e-48f5-9e1f-df4c9493fcbc" path="/var/lib/kubelet/pods/b0dd29e1-253e-48f5-9e1f-df4c9493fcbc/volumes"
Feb 04 08:45:25 crc kubenswrapper[4644]: I0204 08:45:25.342729 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r5qf8"
Feb 04 08:45:25 crc kubenswrapper[4644]: I0204 08:45:25.377008 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r5qf8"
Feb 04 08:45:26 crc kubenswrapper[4644]: I0204 08:45:26.294110 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8rpn5"
Feb 04 08:45:26 crc kubenswrapper[4644]: I0204 08:45:26.329552 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8rpn5"
Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.032600 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g7zgx"]
Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.033316 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g7zgx" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" containerName="registry-server" containerID="cri-o://1ec83c5682ac8a364107cc4e99993b4106c0dcc01c2c8d67159101b76d369698" gracePeriod=30
Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.045039 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d8qh5"]
Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.047809 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d8qh5" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" containerName="registry-server" containerID="cri-o://6fae6705cc055233d73367795c23d4dac1726e36be52a1f7ab506b4c9a0e9754" gracePeriod=30
Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.081019 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bgkbq"]
Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.082607 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" containerName="marketplace-operator" containerID="cri-o://23e31ac2682616e57d2c0ca3938d057f40052fea7bad6a7f6be7dee72651a923" gracePeriod=30
Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.089825 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5qf8"]
Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.090128 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r5qf8" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" containerName="registry-server"
containerID="cri-o://535f50f0c7df3a1a31f2ddab8034dd702c78096816fe6a34529dfb2b6090f94f" gracePeriod=30 Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.104206 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8rpn5"] Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.110678 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nhnlp"] Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.110882 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="811802ba-9b9d-4e09-ac7d-7a62eecabb15" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.110892 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="811802ba-9b9d-4e09-ac7d-7a62eecabb15" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.110904 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0dd29e1-253e-48f5-9e1f-df4c9493fcbc" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.110909 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0dd29e1-253e-48f5-9e1f-df4c9493fcbc" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.110918 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.110924 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.110931 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="811802ba-9b9d-4e09-ac7d-7a62eecabb15" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.110937 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="811802ba-9b9d-4e09-ac7d-7a62eecabb15" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.110946 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0dd29e1-253e-48f5-9e1f-df4c9493fcbc" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.110951 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0dd29e1-253e-48f5-9e1f-df4c9493fcbc" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.110960 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0dd29e1-253e-48f5-9e1f-df4c9493fcbc" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.110967 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0dd29e1-253e-48f5-9e1f-df4c9493fcbc" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.110975 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6faaab9-25f9-47fc-a713-f4b6bde29cbc" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.110981 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6faaab9-25f9-47fc-a713-f4b6bde29cbc" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.110988 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.110994 4644 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.111003 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6faaab9-25f9-47fc-a713-f4b6bde29cbc" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.111008 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6faaab9-25f9-47fc-a713-f4b6bde29cbc" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.111017 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.111022 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.111033 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="811802ba-9b9d-4e09-ac7d-7a62eecabb15" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.111039 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="811802ba-9b9d-4e09-ac7d-7a62eecabb15" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.111046 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6faaab9-25f9-47fc-a713-f4b6bde29cbc" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.111052 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6faaab9-25f9-47fc-a713-f4b6bde29cbc" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.111379 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="811802ba-9b9d-4e09-ac7d-7a62eecabb15" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.111389 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0dd29e1-253e-48f5-9e1f-df4c9493fcbc" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.111399 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6faaab9-25f9-47fc-a713-f4b6bde29cbc" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.111409 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dbe6697-c0eb-45cc-9e46-23dbf1bb1bca" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.112022 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nhnlp"] Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.112093 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.255956 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42a112d4-2c64-4c7f-a895-a85e29b12d8a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nhnlp\" (UID: \"42a112d4-2c64-4c7f-a895-a85e29b12d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.256019 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42a112d4-2c64-4c7f-a895-a85e29b12d8a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nhnlp\" (UID: \"42a112d4-2c64-4c7f-a895-a85e29b12d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.256052 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh8np\" (UniqueName: \"kubernetes.io/projected/42a112d4-2c64-4c7f-a895-a85e29b12d8a-kube-api-access-gh8np\") pod \"marketplace-operator-79b997595-nhnlp\" (UID: \"42a112d4-2c64-4c7f-a895-a85e29b12d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.308610 4644 generic.go:334] "Generic (PLEG): container finished" podID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" containerID="6fae6705cc055233d73367795c23d4dac1726e36be52a1f7ab506b4c9a0e9754" exitCode=0 Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.308679 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8qh5" event={"ID":"b95f491a-610d-44ed-ae19-9e5b7ac25f52","Type":"ContainerDied","Data":"6fae6705cc055233d73367795c23d4dac1726e36be52a1f7ab506b4c9a0e9754"} Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.314715 4644 generic.go:334] "Generic (PLEG): container finished" podID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" containerID="23e31ac2682616e57d2c0ca3938d057f40052fea7bad6a7f6be7dee72651a923" exitCode=0 Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.314784 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" event={"ID":"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c","Type":"ContainerDied","Data":"23e31ac2682616e57d2c0ca3938d057f40052fea7bad6a7f6be7dee72651a923"} Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.320356 4644 generic.go:334] "Generic (PLEG): container finished" podID="55e01c73-5587-45e5-9a8f-47fedc43d340" containerID="535f50f0c7df3a1a31f2ddab8034dd702c78096816fe6a34529dfb2b6090f94f" exitCode=0 Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.320502 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5qf8" event={"ID":"55e01c73-5587-45e5-9a8f-47fedc43d340","Type":"ContainerDied","Data":"535f50f0c7df3a1a31f2ddab8034dd702c78096816fe6a34529dfb2b6090f94f"} Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.325907 4644 generic.go:334] "Generic (PLEG): container finished" podID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" containerID="1ec83c5682ac8a364107cc4e99993b4106c0dcc01c2c8d67159101b76d369698" exitCode=0 Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.326116 4644 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8rpn5" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" containerName="registry-server" containerID="cri-o://dd99e74ea3b8711beddea0e6d75c386fefa83744f04081e045315a412ef61d6c" gracePeriod=30 Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.326449 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7zgx" event={"ID":"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd","Type":"ContainerDied","Data":"1ec83c5682ac8a364107cc4e99993b4106c0dcc01c2c8d67159101b76d369698"} Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.358453 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42a112d4-2c64-4c7f-a895-a85e29b12d8a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nhnlp\" (UID: \"42a112d4-2c64-4c7f-a895-a85e29b12d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.358520 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42a112d4-2c64-4c7f-a895-a85e29b12d8a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nhnlp\" (UID: \"42a112d4-2c64-4c7f-a895-a85e29b12d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.358551 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh8np\" (UniqueName: \"kubernetes.io/projected/42a112d4-2c64-4c7f-a895-a85e29b12d8a-kube-api-access-gh8np\") pod \"marketplace-operator-79b997595-nhnlp\" (UID: \"42a112d4-2c64-4c7f-a895-a85e29b12d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.359486 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42a112d4-2c64-4c7f-a895-a85e29b12d8a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nhnlp\" (UID: \"42a112d4-2c64-4c7f-a895-a85e29b12d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.365673 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42a112d4-2c64-4c7f-a895-a85e29b12d8a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nhnlp\" (UID: \"42a112d4-2c64-4c7f-a895-a85e29b12d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.374427 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh8np\" (UniqueName: \"kubernetes.io/projected/42a112d4-2c64-4c7f-a895-a85e29b12d8a-kube-api-access-gh8np\") pod \"marketplace-operator-79b997595-nhnlp\" (UID: \"42a112d4-2c64-4c7f-a895-a85e29b12d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.490099 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g7zgx" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.521307 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.534147 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5qf8" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.553166 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8qh5" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.629543 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.680688 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95f491a-610d-44ed-ae19-9e5b7ac25f52-catalog-content\") pod \"b95f491a-610d-44ed-ae19-9e5b7ac25f52\" (UID: \"b95f491a-610d-44ed-ae19-9e5b7ac25f52\") " Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.680735 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-catalog-content\") pod \"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd\" (UID: \"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd\") " Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.680757 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnn6h\" (UniqueName: \"kubernetes.io/projected/b95f491a-610d-44ed-ae19-9e5b7ac25f52-kube-api-access-vnn6h\") pod \"b95f491a-610d-44ed-ae19-9e5b7ac25f52\" (UID: \"b95f491a-610d-44ed-ae19-9e5b7ac25f52\") " Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.681029 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cln8h\" (UniqueName: \"kubernetes.io/projected/55e01c73-5587-45e5-9a8f-47fedc43d340-kube-api-access-cln8h\") pod \"55e01c73-5587-45e5-9a8f-47fedc43d340\" (UID: \"55e01c73-5587-45e5-9a8f-47fedc43d340\") " Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.681115 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55e01c73-5587-45e5-9a8f-47fedc43d340-utilities\") pod \"55e01c73-5587-45e5-9a8f-47fedc43d340\" (UID: \"55e01c73-5587-45e5-9a8f-47fedc43d340\") " Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.682155 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55e01c73-5587-45e5-9a8f-47fedc43d340-catalog-content\") pod \"55e01c73-5587-45e5-9a8f-47fedc43d340\" (UID: \"55e01c73-5587-45e5-9a8f-47fedc43d340\") " Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.682192 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zpmh\" (UniqueName: \"kubernetes.io/projected/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-kube-api-access-8zpmh\") pod \"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c\" (UID: \"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c\") " Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.682242 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-marketplace-operator-metrics\") pod \"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c\" (UID: 
\"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c\") " Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.682264 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-marketplace-trusted-ca\") pod \"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c\" (UID: \"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c\") " Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.682281 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9sn2\" (UniqueName: \"kubernetes.io/projected/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-kube-api-access-f9sn2\") pod \"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd\" (UID: \"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd\") " Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.682317 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-utilities\") pod \"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd\" (UID: \"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd\") " Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.682365 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95f491a-610d-44ed-ae19-9e5b7ac25f52-utilities\") pod \"b95f491a-610d-44ed-ae19-9e5b7ac25f52\" (UID: \"b95f491a-610d-44ed-ae19-9e5b7ac25f52\") " Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.691100 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-utilities" (OuterVolumeSpecName: "utilities") pod "b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" (UID: "b6b3f648-8d29-4f2c-b3f0-cb29c65133bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.692781 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55e01c73-5587-45e5-9a8f-47fedc43d340-utilities" (OuterVolumeSpecName: "utilities") pod "55e01c73-5587-45e5-9a8f-47fedc43d340" (UID: "55e01c73-5587-45e5-9a8f-47fedc43d340"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.692071 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b95f491a-610d-44ed-ae19-9e5b7ac25f52-utilities" (OuterVolumeSpecName: "utilities") pod "b95f491a-610d-44ed-ae19-9e5b7ac25f52" (UID: "b95f491a-610d-44ed-ae19-9e5b7ac25f52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.693359 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" (UID: "3aa7cc3c-1068-4427-a0c2-24e952c5ed2c"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.697811 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-kube-api-access-8zpmh" (OuterVolumeSpecName: "kube-api-access-8zpmh") pod "3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" (UID: "3aa7cc3c-1068-4427-a0c2-24e952c5ed2c"). InnerVolumeSpecName "kube-api-access-8zpmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.698932 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" (UID: "3aa7cc3c-1068-4427-a0c2-24e952c5ed2c"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.699196 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55e01c73-5587-45e5-9a8f-47fedc43d340-kube-api-access-cln8h" (OuterVolumeSpecName: "kube-api-access-cln8h") pod "55e01c73-5587-45e5-9a8f-47fedc43d340" (UID: "55e01c73-5587-45e5-9a8f-47fedc43d340"). InnerVolumeSpecName "kube-api-access-cln8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.701101 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b95f491a-610d-44ed-ae19-9e5b7ac25f52-kube-api-access-vnn6h" (OuterVolumeSpecName: "kube-api-access-vnn6h") pod "b95f491a-610d-44ed-ae19-9e5b7ac25f52" (UID: "b95f491a-610d-44ed-ae19-9e5b7ac25f52"). InnerVolumeSpecName "kube-api-access-vnn6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.722487 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-kube-api-access-f9sn2" (OuterVolumeSpecName: "kube-api-access-f9sn2") pod "b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" (UID: "b6b3f648-8d29-4f2c-b3f0-cb29c65133bd"). InnerVolumeSpecName "kube-api-access-f9sn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.771192 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rpn5" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.776777 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55e01c73-5587-45e5-9a8f-47fedc43d340-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55e01c73-5587-45e5-9a8f-47fedc43d340" (UID: "55e01c73-5587-45e5-9a8f-47fedc43d340"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.784767 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmwdg\" (UniqueName: \"kubernetes.io/projected/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-kube-api-access-fmwdg\") pod \"8c424c9d-56cc-42b6-95b2-c23ff3ed8846\" (UID: \"8c424c9d-56cc-42b6-95b2-c23ff3ed8846\") " Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.784920 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-catalog-content\") pod \"8c424c9d-56cc-42b6-95b2-c23ff3ed8846\" (UID: \"8c424c9d-56cc-42b6-95b2-c23ff3ed8846\") " Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.785001 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-utilities\") pod \"8c424c9d-56cc-42b6-95b2-c23ff3ed8846\" (UID: \"8c424c9d-56cc-42b6-95b2-c23ff3ed8846\") " Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.785375 4644 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.785389 4644 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.785399 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9sn2\" (UniqueName: \"kubernetes.io/projected/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-kube-api-access-f9sn2\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.785408 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.785417 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95f491a-610d-44ed-ae19-9e5b7ac25f52-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.785440 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnn6h\" (UniqueName: \"kubernetes.io/projected/b95f491a-610d-44ed-ae19-9e5b7ac25f52-kube-api-access-vnn6h\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.785449 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cln8h\" (UniqueName: \"kubernetes.io/projected/55e01c73-5587-45e5-9a8f-47fedc43d340-kube-api-access-cln8h\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.785457 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55e01c73-5587-45e5-9a8f-47fedc43d340-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.785467 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/55e01c73-5587-45e5-9a8f-47fedc43d340-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.785476 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zpmh\" (UniqueName: \"kubernetes.io/projected/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c-kube-api-access-8zpmh\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.787004 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-utilities" (OuterVolumeSpecName: "utilities") pod "8c424c9d-56cc-42b6-95b2-c23ff3ed8846" (UID: "8c424c9d-56cc-42b6-95b2-c23ff3ed8846"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.796563 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-kube-api-access-fmwdg" (OuterVolumeSpecName: "kube-api-access-fmwdg") pod "8c424c9d-56cc-42b6-95b2-c23ff3ed8846" (UID: "8c424c9d-56cc-42b6-95b2-c23ff3ed8846"). InnerVolumeSpecName "kube-api-access-fmwdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.841139 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b95f491a-610d-44ed-ae19-9e5b7ac25f52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b95f491a-610d-44ed-ae19-9e5b7ac25f52" (UID: "b95f491a-610d-44ed-ae19-9e5b7ac25f52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.843519 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" (UID: "b6b3f648-8d29-4f2c-b3f0-cb29c65133bd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.889031 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmwdg\" (UniqueName: \"kubernetes.io/projected/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-kube-api-access-fmwdg\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.889070 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.889082 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95f491a-610d-44ed-ae19-9e5b7ac25f52-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.889090 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.912756 4644 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.913225 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.913348 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.913430 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.913501 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.913584 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.913661 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.913738 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" containerName="marketplace-operator" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.913793 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" containerName="marketplace-operator" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.913874 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.913949 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.914040 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" containerName="registry-server" Feb 04 08:45:28 crc 
kubenswrapper[4644]: I0204 08:45:28.914117 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.914196 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.914251 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.914348 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.914429 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.914504 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.914561 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.914638 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.914713 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.914792 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.914848 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.914953 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.915057 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" containerName="extract-content" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.915157 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.915231 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" containerName="extract-utilities" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.915398 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.915515 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.915590 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" 
containerName="marketplace-operator" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.915664 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.915724 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" containerName="registry-server" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.916147 4644 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.916254 4644 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.916395 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.916430 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.916628 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89" gracePeriod=15 Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.916665 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.916835 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.916927 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.916997 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.917852 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.917921 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.918024 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.918140 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.918266 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.918515 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.918597 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.918653 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.918712 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 04 08:45:28 crc kubenswrapper[4644]: E0204 08:45:28.918769 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.918851 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.916586 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b" gracePeriod=15 Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.916624 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da" gracePeriod=15 Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.916712 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9" gracePeriod=15 Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.916716 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644" gracePeriod=15 Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.919170 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.919434 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.919457 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.919469 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.919476 4644 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.919484 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.919493 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.920101 4644 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.966870 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.991030 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.991091 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.991118 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.991160 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.991192 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.991216 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.991238 4644 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:28 crc kubenswrapper[4644]: I0204 08:45:28.991259 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.001420 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c424c9d-56cc-42b6-95b2-c23ff3ed8846" (UID: "8c424c9d-56cc-42b6-95b2-c23ff3ed8846"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093308 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093315 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093484 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093540 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093577 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093591 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093603 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093647 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093650 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093667 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093688 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093726 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093729 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093750 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093854 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.093939 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.094004 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c424c9d-56cc-42b6-95b2-c23ff3ed8846-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.257138 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:45:29 crc kubenswrapper[4644]: W0204 08:45:29.278990 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f7133fa3b15ce0139a29ed96a5f23b017427ca61f696d0e4b3cb035e25bbb3b5 WatchSource:0}: Error finding container f7133fa3b15ce0139a29ed96a5f23b017427ca61f696d0e4b3cb035e25bbb3b5: Status 404 returned error can't find the container with id f7133fa3b15ce0139a29ed96a5f23b017427ca61f696d0e4b3cb035e25bbb3b5 Feb 04 08:45:29 crc kubenswrapper[4644]: E0204 08:45:29.281856 4644 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.136:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1890feba360ddd13 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-04 08:45:29.280855315 +0000 UTC m=+239.320913070,LastTimestamp:2026-02-04 08:45:29.280855315 +0000 UTC m=+239.320913070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 04 08:45:29 crc kubenswrapper[4644]: E0204 08:45:29.314637 4644 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 04 08:45:29 crc kubenswrapper[4644]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-nhnlp_openshift-marketplace_42a112d4-2c64-4c7f-a895-a85e29b12d8a_0(5e32bd53e0a0103b6cd86575f7c88e563d66d7214b2c4cb78385cbdb0a8c46e6): error adding pod openshift-marketplace_marketplace-operator-79b997595-nhnlp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5e32bd53e0a0103b6cd86575f7c88e563d66d7214b2c4cb78385cbdb0a8c46e6" Netns:"/var/run/netns/87b23508-1b43-436d-99c3-24cd9385e0dd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-nhnlp;K8S_POD_INFRA_CONTAINER_ID=5e32bd53e0a0103b6cd86575f7c88e563d66d7214b2c4cb78385cbdb0a8c46e6;K8S_POD_UID=42a112d4-2c64-4c7f-a895-a85e29b12d8a" Path:"" ERRORED: error configuring pod 
[openshift-marketplace/marketplace-operator-79b997595-nhnlp] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-nhnlp/42a112d4-2c64-4c7f-a895-a85e29b12d8a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-nhnlp?timeout=1m0s": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:45:29 crc kubenswrapper[4644]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 04 08:45:29 crc kubenswrapper[4644]: > Feb 04 08:45:29 crc kubenswrapper[4644]: E0204 08:45:29.314737 4644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 04 08:45:29 crc kubenswrapper[4644]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-nhnlp_openshift-marketplace_42a112d4-2c64-4c7f-a895-a85e29b12d8a_0(5e32bd53e0a0103b6cd86575f7c88e563d66d7214b2c4cb78385cbdb0a8c46e6): error adding pod openshift-marketplace_marketplace-operator-79b997595-nhnlp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5e32bd53e0a0103b6cd86575f7c88e563d66d7214b2c4cb78385cbdb0a8c46e6" Netns:"/var/run/netns/87b23508-1b43-436d-99c3-24cd9385e0dd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-nhnlp;K8S_POD_INFRA_CONTAINER_ID=5e32bd53e0a0103b6cd86575f7c88e563d66d7214b2c4cb78385cbdb0a8c46e6;K8S_POD_UID=42a112d4-2c64-4c7f-a895-a85e29b12d8a" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-nhnlp] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-nhnlp/42a112d4-2c64-4c7f-a895-a85e29b12d8a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-nhnlp?timeout=1m0s": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:45:29 crc kubenswrapper[4644]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 04 08:45:29 crc kubenswrapper[4644]: > pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:29 crc kubenswrapper[4644]: E0204 08:45:29.314763 4644 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 04 
08:45:29 crc kubenswrapper[4644]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-nhnlp_openshift-marketplace_42a112d4-2c64-4c7f-a895-a85e29b12d8a_0(5e32bd53e0a0103b6cd86575f7c88e563d66d7214b2c4cb78385cbdb0a8c46e6): error adding pod openshift-marketplace_marketplace-operator-79b997595-nhnlp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5e32bd53e0a0103b6cd86575f7c88e563d66d7214b2c4cb78385cbdb0a8c46e6" Netns:"/var/run/netns/87b23508-1b43-436d-99c3-24cd9385e0dd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-nhnlp;K8S_POD_INFRA_CONTAINER_ID=5e32bd53e0a0103b6cd86575f7c88e563d66d7214b2c4cb78385cbdb0a8c46e6;K8S_POD_UID=42a112d4-2c64-4c7f-a895-a85e29b12d8a" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-nhnlp] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-nhnlp/42a112d4-2c64-4c7f-a895-a85e29b12d8a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-nhnlp?timeout=1m0s": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:45:29 crc kubenswrapper[4644]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 04 08:45:29 crc kubenswrapper[4644]: > pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:29 crc kubenswrapper[4644]: E0204 08:45:29.314828 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"marketplace-operator-79b997595-nhnlp_openshift-marketplace(42a112d4-2c64-4c7f-a895-a85e29b12d8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"marketplace-operator-79b997595-nhnlp_openshift-marketplace(42a112d4-2c64-4c7f-a895-a85e29b12d8a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-nhnlp_openshift-marketplace_42a112d4-2c64-4c7f-a895-a85e29b12d8a_0(5e32bd53e0a0103b6cd86575f7c88e563d66d7214b2c4cb78385cbdb0a8c46e6): error adding pod openshift-marketplace_marketplace-operator-79b997595-nhnlp to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"5e32bd53e0a0103b6cd86575f7c88e563d66d7214b2c4cb78385cbdb0a8c46e6\\\" Netns:\\\"/var/run/netns/87b23508-1b43-436d-99c3-24cd9385e0dd\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-nhnlp;K8S_POD_INFRA_CONTAINER_ID=5e32bd53e0a0103b6cd86575f7c88e563d66d7214b2c4cb78385cbdb0a8c46e6;K8S_POD_UID=42a112d4-2c64-4c7f-a895-a85e29b12d8a\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-marketplace/marketplace-operator-79b997595-nhnlp] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-nhnlp/42a112d4-2c64-4c7f-a895-a85e29b12d8a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-nhnlp?timeout=1m0s\\\": dial tcp 38.102.83.136:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" podUID="42a112d4-2c64-4c7f-a895-a85e29b12d8a" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.337385 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7zgx" event={"ID":"b6b3f648-8d29-4f2c-b3f0-cb29c65133bd","Type":"ContainerDied","Data":"b279dc5e2ebe75ddbaf718adf9e9a387f4573db944f6325199183491290265eb"} Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.337893 4644 scope.go:117] "RemoveContainer" containerID="1ec83c5682ac8a364107cc4e99993b4106c0dcc01c2c8d67159101b76d369698" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.337408 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g7zgx" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.338873 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.339084 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.345267 4644 generic.go:334] "Generic (PLEG): container finished" podID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" containerID="dd99e74ea3b8711beddea0e6d75c386fefa83744f04081e045315a412ef61d6c" exitCode=0 Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.345424 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rpn5" event={"ID":"8c424c9d-56cc-42b6-95b2-c23ff3ed8846","Type":"ContainerDied","Data":"dd99e74ea3b8711beddea0e6d75c386fefa83744f04081e045315a412ef61d6c"} Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.345463 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rpn5" event={"ID":"8c424c9d-56cc-42b6-95b2-c23ff3ed8846","Type":"ContainerDied","Data":"9a05a60cd3cd2cccf7b61ec0fad9285b25d6fe8b4fbe21fcd4870af78f36b277"} Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.345642 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rpn5" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.346419 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.346686 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.346898 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.348516 4644 generic.go:334] "Generic (PLEG): container finished" podID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" containerID="25851de601f33307b0e8432c78c0f8cf8e91b6d98fb092ee8b97576d713453ea" exitCode=0 Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.348568 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"967f5ee0-e078-4ed0-83ae-7147dcd7192e","Type":"ContainerDied","Data":"25851de601f33307b0e8432c78c0f8cf8e91b6d98fb092ee8b97576d713453ea"} Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.349006 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.349227 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.349481 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.349694 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.351867 4644 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8qh5" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.352612 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8qh5" event={"ID":"b95f491a-610d-44ed-ae19-9e5b7ac25f52","Type":"ContainerDied","Data":"51c144d548573cd0f00112c498949919227bb0d3b69860d89433bdc80afc930c"} Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.352714 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.352974 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.353252 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.353446 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.353584 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.360355 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.364807 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.366588 4644 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da" exitCode=0 Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.366623 4644 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9" exitCode=0 Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.366633 4644 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89" exitCode=0 Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.366641 4644 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644" exitCode=2 Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.369499 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" event={"ID":"3aa7cc3c-1068-4427-a0c2-24e952c5ed2c","Type":"ContainerDied","Data":"aa7d4ec95ebb3ddaac43ab84ca1ac75daf9f4faef66c921f30cb1065716ed63e"} Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.369605 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.371076 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f7133fa3b15ce0139a29ed96a5f23b017427ca61f696d0e4b3cb035e25bbb3b5"} Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.371211 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.371586 4644 status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.371842 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.372116 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.372302 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.372540 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.373429 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.373771 4644 status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.374138 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.374500 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.374707 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.374914 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.376872 4644 status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.377061 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.377224 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" 
pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.377291 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5qf8" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.377231 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5qf8" event={"ID":"55e01c73-5587-45e5-9a8f-47fedc43d340","Type":"ContainerDied","Data":"32edb925bcd11bf4d4e19103b5b08cc3086ab28cb8e1d9fbc5314a710887180f"} Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.377408 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.377542 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.377686 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.377842 4644 status_manager.go:851] "Failed to get status for pod" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" pod="openshift-marketplace/redhat-marketplace-r5qf8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5qf8\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.377987 4644 status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.378124 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.378373 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" 
Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.378655 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.379088 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.379903 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.380250 4644 scope.go:117] "RemoveContainer" containerID="497903666e07ca6a5db97517474fa891b04cbf658d8ba3bf3447c384a57b28b7" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.396160 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.396384 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.396642 4644 status_manager.go:851] "Failed to get status for pod" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" pod="openshift-marketplace/redhat-marketplace-r5qf8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5qf8\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.396814 4644 status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.396981 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.397231 4644 status_manager.go:851] "Failed to get status for pod" 
podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.397453 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.431717 4644 scope.go:117] "RemoveContainer" containerID="6ca1d7bd92a38f81eef5a9af25954dc266a6f587c882be92e2081daa2d0cbefd" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.450985 4644 scope.go:117] "RemoveContainer" containerID="dd99e74ea3b8711beddea0e6d75c386fefa83744f04081e045315a412ef61d6c" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.466792 4644 scope.go:117] "RemoveContainer" containerID="44e4e6c3e6c2f6ef52783befef4943dd4caa3523675ee9484612c10dd716fdb3" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.507628 4644 scope.go:117] "RemoveContainer" containerID="ecaf9505796715e88d96d4f292e4788a4368b104954edbebfb6b6ad9e8d44161" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.529556 4644 scope.go:117] "RemoveContainer" containerID="dd99e74ea3b8711beddea0e6d75c386fefa83744f04081e045315a412ef61d6c" Feb 04 08:45:29 crc kubenswrapper[4644]: E0204 08:45:29.529985 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd99e74ea3b8711beddea0e6d75c386fefa83744f04081e045315a412ef61d6c\": container with ID starting with dd99e74ea3b8711beddea0e6d75c386fefa83744f04081e045315a412ef61d6c not found: ID does not exist" containerID="dd99e74ea3b8711beddea0e6d75c386fefa83744f04081e045315a412ef61d6c" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.530014 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd99e74ea3b8711beddea0e6d75c386fefa83744f04081e045315a412ef61d6c"} err="failed to get container status \"dd99e74ea3b8711beddea0e6d75c386fefa83744f04081e045315a412ef61d6c\": rpc error: code = NotFound desc = could not find container \"dd99e74ea3b8711beddea0e6d75c386fefa83744f04081e045315a412ef61d6c\": container with ID starting with dd99e74ea3b8711beddea0e6d75c386fefa83744f04081e045315a412ef61d6c not found: ID does not exist" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.530043 4644 scope.go:117] "RemoveContainer" containerID="44e4e6c3e6c2f6ef52783befef4943dd4caa3523675ee9484612c10dd716fdb3" Feb 04 08:45:29 crc kubenswrapper[4644]: E0204 08:45:29.530321 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44e4e6c3e6c2f6ef52783befef4943dd4caa3523675ee9484612c10dd716fdb3\": container with ID starting with 44e4e6c3e6c2f6ef52783befef4943dd4caa3523675ee9484612c10dd716fdb3 not found: ID does not exist" containerID="44e4e6c3e6c2f6ef52783befef4943dd4caa3523675ee9484612c10dd716fdb3" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.530554 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44e4e6c3e6c2f6ef52783befef4943dd4caa3523675ee9484612c10dd716fdb3"} err="failed to get container status 
\"44e4e6c3e6c2f6ef52783befef4943dd4caa3523675ee9484612c10dd716fdb3\": rpc error: code = NotFound desc = could not find container \"44e4e6c3e6c2f6ef52783befef4943dd4caa3523675ee9484612c10dd716fdb3\": container with ID starting with 44e4e6c3e6c2f6ef52783befef4943dd4caa3523675ee9484612c10dd716fdb3 not found: ID does not exist" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.530574 4644 scope.go:117] "RemoveContainer" containerID="ecaf9505796715e88d96d4f292e4788a4368b104954edbebfb6b6ad9e8d44161" Feb 04 08:45:29 crc kubenswrapper[4644]: E0204 08:45:29.531032 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecaf9505796715e88d96d4f292e4788a4368b104954edbebfb6b6ad9e8d44161\": container with ID starting with ecaf9505796715e88d96d4f292e4788a4368b104954edbebfb6b6ad9e8d44161 not found: ID does not exist" containerID="ecaf9505796715e88d96d4f292e4788a4368b104954edbebfb6b6ad9e8d44161" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.531060 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecaf9505796715e88d96d4f292e4788a4368b104954edbebfb6b6ad9e8d44161"} err="failed to get container status \"ecaf9505796715e88d96d4f292e4788a4368b104954edbebfb6b6ad9e8d44161\": rpc error: code = NotFound desc = could not find container \"ecaf9505796715e88d96d4f292e4788a4368b104954edbebfb6b6ad9e8d44161\": container with ID starting with ecaf9505796715e88d96d4f292e4788a4368b104954edbebfb6b6ad9e8d44161 not found: ID does not exist" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.531081 4644 scope.go:117] "RemoveContainer" containerID="6fae6705cc055233d73367795c23d4dac1726e36be52a1f7ab506b4c9a0e9754" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.552898 4644 scope.go:117] "RemoveContainer" containerID="fd1a854ffc0f64b72fd9836aaa825e196d38f7f79dbfe82aae03a20bb8f16284" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.570672 4644 scope.go:117] "RemoveContainer" containerID="ec8a89dc8cf5de2bb8235f9b5e24ef38b1c0b2083c7fa631c55f656125a01535" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.583607 4644 scope.go:117] "RemoveContainer" containerID="0eee0b1c28b4d1a6104879ddaaaaba9a48c63d158e5d9d617085705a0f00c269" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.609958 4644 scope.go:117] "RemoveContainer" containerID="23e31ac2682616e57d2c0ca3938d057f40052fea7bad6a7f6be7dee72651a923" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.632170 4644 scope.go:117] "RemoveContainer" containerID="535f50f0c7df3a1a31f2ddab8034dd702c78096816fe6a34529dfb2b6090f94f" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.648308 4644 scope.go:117] "RemoveContainer" containerID="d94d7ff194b014d97b968aa63f89c9b91a3cc8ed7b160c9144395d8e31f8c9d4" Feb 04 08:45:29 crc kubenswrapper[4644]: I0204 08:45:29.662408 4644 scope.go:117] "RemoveContainer" containerID="7eec73653c7630c9b93779e74d3f619d7bbb15afda1f992be0bb5da6116236e5" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.387377 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.389475 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d056f3d01918dd2d04a5271731b5ff74284d753e712f6cc3c9aae3c729e3ac52"} Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.390381 4644 status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.390749 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.391013 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.391266 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.391504 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.391714 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.391920 4644 status_manager.go:851] "Failed to get status for pod" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" pod="openshift-marketplace/redhat-marketplace-r5qf8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5qf8\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: E0204 08:45:30.507247 4644 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: E0204 08:45:30.509320 4644 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 
08:45:30 crc kubenswrapper[4644]: E0204 08:45:30.509620 4644 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: E0204 08:45:30.509875 4644 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: E0204 08:45:30.510126 4644 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.510172 4644 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 04 08:45:30 crc kubenswrapper[4644]: E0204 08:45:30.510433 4644 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="200ms" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.614351 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.615086 4644 status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.615349 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.615652 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.615861 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.616162 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 
38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.616465 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.616827 4644 status_manager.go:851] "Failed to get status for pod" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" pod="openshift-marketplace/redhat-marketplace-r5qf8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5qf8\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.661160 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.661746 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.662075 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.662301 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.662607 4644 status_manager.go:851] "Failed to get status for pod" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" pod="openshift-marketplace/redhat-marketplace-r5qf8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5qf8\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.662799 4644 status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.663041 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:30 crc kubenswrapper[4644]: E0204 08:45:30.711249 4644 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="400ms" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.720543 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/967f5ee0-e078-4ed0-83ae-7147dcd7192e-kube-api-access\") pod \"967f5ee0-e078-4ed0-83ae-7147dcd7192e\" (UID: \"967f5ee0-e078-4ed0-83ae-7147dcd7192e\") " Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.720724 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/967f5ee0-e078-4ed0-83ae-7147dcd7192e-kubelet-dir\") pod \"967f5ee0-e078-4ed0-83ae-7147dcd7192e\" (UID: \"967f5ee0-e078-4ed0-83ae-7147dcd7192e\") " Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.720852 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/967f5ee0-e078-4ed0-83ae-7147dcd7192e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "967f5ee0-e078-4ed0-83ae-7147dcd7192e" (UID: "967f5ee0-e078-4ed0-83ae-7147dcd7192e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.720899 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/967f5ee0-e078-4ed0-83ae-7147dcd7192e-var-lock" (OuterVolumeSpecName: "var-lock") pod "967f5ee0-e078-4ed0-83ae-7147dcd7192e" (UID: "967f5ee0-e078-4ed0-83ae-7147dcd7192e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.720895 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/967f5ee0-e078-4ed0-83ae-7147dcd7192e-var-lock\") pod \"967f5ee0-e078-4ed0-83ae-7147dcd7192e\" (UID: \"967f5ee0-e078-4ed0-83ae-7147dcd7192e\") " Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.721406 4644 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/967f5ee0-e078-4ed0-83ae-7147dcd7192e-var-lock\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.721424 4644 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/967f5ee0-e078-4ed0-83ae-7147dcd7192e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.726574 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/967f5ee0-e078-4ed0-83ae-7147dcd7192e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "967f5ee0-e078-4ed0-83ae-7147dcd7192e" (UID: "967f5ee0-e078-4ed0-83ae-7147dcd7192e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:45:30 crc kubenswrapper[4644]: I0204 08:45:30.822403 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/967f5ee0-e078-4ed0-83ae-7147dcd7192e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:31 crc kubenswrapper[4644]: E0204 08:45:31.112436 4644 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="800ms" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.397444 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.398469 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.399095 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.399288 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.400360 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.400568 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.400780 4644 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.400964 4644 status_manager.go:851] "Failed to get status for pod" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" pod="openshift-marketplace/redhat-marketplace-r5qf8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5qf8\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.401192 4644 
status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.401570 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.411579 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.412493 4644 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b" exitCode=0 Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.412582 4644 scope.go:117] "RemoveContainer" containerID="001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.412737 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.416642 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"967f5ee0-e078-4ed0-83ae-7147dcd7192e","Type":"ContainerDied","Data":"e7356a07019162a3402cc8912972b9b13b74c59b028a84ebab95f985671d687f"} Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.416706 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7356a07019162a3402cc8912972b9b13b74c59b028a84ebab95f985671d687f" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.416836 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.425905 4644 scope.go:117] "RemoveContainer" containerID="532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.428874 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.429027 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.429155 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.429290 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.429614 4644 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.429957 4644 status_manager.go:851] "Failed to get status for pod" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" pod="openshift-marketplace/redhat-marketplace-r5qf8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5qf8\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.430115 4644 status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.430266 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.448409 4644 scope.go:117] "RemoveContainer" 
containerID="af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.461494 4644 scope.go:117] "RemoveContainer" containerID="b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.475933 4644 scope.go:117] "RemoveContainer" containerID="fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.479945 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.479996 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.480017 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.480047 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.480063 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.480153 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.480358 4644 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.480376 4644 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.480386 4644 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.491980 4644 scope.go:117] "RemoveContainer" containerID="884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.513078 4644 scope.go:117] "RemoveContainer" containerID="001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da" Feb 04 08:45:31 crc kubenswrapper[4644]: E0204 08:45:31.513635 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\": container with ID starting with 001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da not found: ID does not exist" containerID="001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.513676 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da"} err="failed to get container status \"001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\": rpc error: code = NotFound desc = could not find container \"001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da\": container with ID starting with 001a2b1e187f40943f5963143552edcf329124d1f524ef272960b24bf8b8e5da not found: ID does not exist" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.513707 4644 scope.go:117] "RemoveContainer" containerID="532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9" Feb 04 08:45:31 crc kubenswrapper[4644]: E0204 08:45:31.514432 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\": container with ID starting with 532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9 not found: ID does not exist" containerID="532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.514464 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9"} err="failed to get container status \"532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\": rpc error: code = NotFound desc = could not find container \"532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9\": container with ID starting with 532ab1f6680a8a51e1c233df43f3258499159e489b66cdb67a0c65e497a002e9 not found: ID does not exist" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.514485 4644 
scope.go:117] "RemoveContainer" containerID="af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89" Feb 04 08:45:31 crc kubenswrapper[4644]: E0204 08:45:31.514766 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\": container with ID starting with af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89 not found: ID does not exist" containerID="af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.514796 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89"} err="failed to get container status \"af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\": rpc error: code = NotFound desc = could not find container \"af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89\": container with ID starting with af6c473c981acf3ba3f7038fd6a21524e6224151a8b4462a74f43b266eaf0d89 not found: ID does not exist" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.514815 4644 scope.go:117] "RemoveContainer" containerID="b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644" Feb 04 08:45:31 crc kubenswrapper[4644]: E0204 08:45:31.515099 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\": container with ID starting with b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644 not found: ID does not exist" containerID="b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.515126 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644"} err="failed to get container status \"b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\": rpc error: code = NotFound desc = could not find container \"b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644\": container with ID starting with b80dfe84b60a7ae2c8993a73d24a4e10774b9351bde9180f978a90223ebc4644 not found: ID does not exist" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.515147 4644 scope.go:117] "RemoveContainer" containerID="fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b" Feb 04 08:45:31 crc kubenswrapper[4644]: E0204 08:45:31.515413 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\": container with ID starting with fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b not found: ID does not exist" containerID="fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.515443 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b"} err="failed to get container status \"fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\": rpc error: code = NotFound desc = could not find container \"fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b\": container with ID starting with 
fbec68cd4bc3194d930ee7dfbc633af04f208cf8688e330916a0562123c3393b not found: ID does not exist" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.515461 4644 scope.go:117] "RemoveContainer" containerID="884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311" Feb 04 08:45:31 crc kubenswrapper[4644]: E0204 08:45:31.515707 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\": container with ID starting with 884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311 not found: ID does not exist" containerID="884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.515733 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311"} err="failed to get container status \"884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\": rpc error: code = NotFound desc = could not find container \"884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311\": container with ID starting with 884927b862681bbcdbec660f6c31232d9737515705964c65439c3c383edfb311 not found: ID does not exist" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.729933 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.730308 4644 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.730527 4644 status_manager.go:851] "Failed to get status for pod" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" pod="openshift-marketplace/redhat-marketplace-r5qf8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5qf8\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.730721 4644 status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.730927 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.731110 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" 
pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.731299 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: I0204 08:45:31.731505 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:31 crc kubenswrapper[4644]: E0204 08:45:31.913460 4644 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="1.6s" Feb 04 08:45:32 crc kubenswrapper[4644]: I0204 08:45:32.666907 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 04 08:45:33 crc kubenswrapper[4644]: E0204 08:45:33.515540 4644 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="3.2s" Feb 04 08:45:36 crc kubenswrapper[4644]: E0204 08:45:36.716980 4644 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="6.4s" Feb 04 08:45:37 crc kubenswrapper[4644]: E0204 08:45:37.281042 4644 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.136:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1890feba360ddd13 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-04 08:45:29.280855315 +0000 UTC m=+239.320913070,LastTimestamp:2026-02-04 08:45:29.280855315 +0000 UTC m=+239.320913070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 04 08:45:40 crc kubenswrapper[4644]: I0204 08:45:40.663682 4644 status_manager.go:851] 
"Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:40 crc kubenswrapper[4644]: I0204 08:45:40.664930 4644 status_manager.go:851] "Failed to get status for pod" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" pod="openshift-marketplace/redhat-marketplace-r5qf8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5qf8\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:40 crc kubenswrapper[4644]: I0204 08:45:40.665475 4644 status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:40 crc kubenswrapper[4644]: I0204 08:45:40.665942 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:40 crc kubenswrapper[4644]: I0204 08:45:40.666408 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:40 crc kubenswrapper[4644]: I0204 08:45:40.666857 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:40 crc kubenswrapper[4644]: I0204 08:45:40.667151 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:42 crc kubenswrapper[4644]: I0204 08:45:42.486931 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 04 08:45:42 crc kubenswrapper[4644]: I0204 08:45:42.486988 4644 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d" exitCode=1 Feb 04 08:45:42 crc kubenswrapper[4644]: I0204 08:45:42.487023 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d"} Feb 
04 08:45:42 crc kubenswrapper[4644]: I0204 08:45:42.487566 4644 scope.go:117] "RemoveContainer" containerID="d6c733f3c147d3d9491eb82d5608a9080e9b5becc19f98a20d7e1c3e6ce7a72d" Feb 04 08:45:42 crc kubenswrapper[4644]: I0204 08:45:42.488521 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:42 crc kubenswrapper[4644]: I0204 08:45:42.488737 4644 status_manager.go:851] "Failed to get status for pod" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" pod="openshift-marketplace/redhat-marketplace-r5qf8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5qf8\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:42 crc kubenswrapper[4644]: I0204 08:45:42.489126 4644 status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:42 crc kubenswrapper[4644]: I0204 08:45:42.489830 4644 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:42 crc kubenswrapper[4644]: I0204 08:45:42.490059 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:42 crc kubenswrapper[4644]: I0204 08:45:42.490264 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:42 crc kubenswrapper[4644]: I0204 08:45:42.490504 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:42 crc kubenswrapper[4644]: I0204 08:45:42.490727 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: E0204 08:45:43.118932 4644 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="7s" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.499123 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.499177 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0c47a027155269b560a52f5b5338574625c0b5559573ed908acc36168da8cd22"} Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.500926 4644 status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.501776 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.502188 4644 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.502640 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.502947 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.503234 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.503576 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial 
tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.503899 4644 status_manager.go:851] "Failed to get status for pod" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" pod="openshift-marketplace/redhat-marketplace-r5qf8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5qf8\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.659800 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.661237 4644 status_manager.go:851] "Failed to get status for pod" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" pod="openshift-marketplace/redhat-marketplace-r5qf8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5qf8\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.670288 4644 status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.670776 4644 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.671143 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.671623 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.672144 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.672548 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.674588 4644 status_manager.go:851] "Failed to get 
status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.688178 4644 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05bc2b7d-ea50-4938-9c52-1d15d68aba83" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.688220 4644 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05bc2b7d-ea50-4938-9c52-1d15d68aba83" Feb 04 08:45:43 crc kubenswrapper[4644]: E0204 08:45:43.688728 4644 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:43 crc kubenswrapper[4644]: I0204 08:45:43.689313 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:43 crc kubenswrapper[4644]: W0204 08:45:43.723284 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-70a0c23d1c0dd3bc8449147befaaa4fb62200b1d7738692c56f37509cfacac5f WatchSource:0}: Error finding container 70a0c23d1c0dd3bc8449147befaaa4fb62200b1d7738692c56f37509cfacac5f: Status 404 returned error can't find the container with id 70a0c23d1c0dd3bc8449147befaaa4fb62200b1d7738692c56f37509cfacac5f Feb 04 08:45:44 crc kubenswrapper[4644]: E0204 08:45:44.465902 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:45:44Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:45:44Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:45:44Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T08:45:44Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:44 crc kubenswrapper[4644]: E0204 08:45:44.466350 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:44 crc kubenswrapper[4644]: E0204 08:45:44.466517 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:44 crc kubenswrapper[4644]: E0204 08:45:44.466687 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:44 crc kubenswrapper[4644]: E0204 08:45:44.466845 4644 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:44 crc kubenswrapper[4644]: E0204 08:45:44.466860 4644 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 08:45:44 crc kubenswrapper[4644]: I0204 08:45:44.511440 4644 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="73805a94d3be2a9f8f46302e9e61cf76f92756052302a1222ca14d51edecaf38" exitCode=0 Feb 04 08:45:44 crc kubenswrapper[4644]: I0204 08:45:44.511509 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"73805a94d3be2a9f8f46302e9e61cf76f92756052302a1222ca14d51edecaf38"} Feb 04 08:45:44 crc kubenswrapper[4644]: I0204 08:45:44.511594 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"70a0c23d1c0dd3bc8449147befaaa4fb62200b1d7738692c56f37509cfacac5f"} Feb 04 08:45:44 crc kubenswrapper[4644]: I0204 08:45:44.512059 4644 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05bc2b7d-ea50-4938-9c52-1d15d68aba83" Feb 04 08:45:44 crc kubenswrapper[4644]: I0204 08:45:44.512088 4644 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05bc2b7d-ea50-4938-9c52-1d15d68aba83" Feb 04 08:45:44 crc kubenswrapper[4644]: I0204 08:45:44.512447 4644 status_manager.go:851] "Failed to get status for pod" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" pod="openshift-marketplace/redhat-marketplace-r5qf8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5qf8\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:44 crc kubenswrapper[4644]: E0204 08:45:44.512690 4644 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:44 crc kubenswrapper[4644]: I0204 08:45:44.512845 4644 status_manager.go:851] "Failed to get status for pod" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" pod="openshift-marketplace/marketplace-operator-79b997595-bgkbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-bgkbq\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:44 crc kubenswrapper[4644]: I0204 08:45:44.513200 4644 status_manager.go:851] "Failed to get status for pod" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:44 crc kubenswrapper[4644]: I0204 08:45:44.513577 4644 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:44 crc kubenswrapper[4644]: I0204 08:45:44.513882 4644 status_manager.go:851] "Failed to get status for pod" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" pod="openshift-marketplace/certified-operators-g7zgx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g7zgx\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:44 crc kubenswrapper[4644]: I0204 08:45:44.514206 4644 status_manager.go:851] "Failed to get status for pod" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" pod="openshift-marketplace/community-operators-d8qh5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d8qh5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:44 crc kubenswrapper[4644]: I0204 08:45:44.514502 4644 status_manager.go:851] "Failed to get status for pod" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" pod="openshift-marketplace/redhat-operators-8rpn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rpn5\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:44 crc kubenswrapper[4644]: I0204 08:45:44.514927 4644 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Feb 04 08:45:44 crc kubenswrapper[4644]: I0204 08:45:44.659054 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:44 crc kubenswrapper[4644]: I0204 08:45:44.659611 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:45 crc kubenswrapper[4644]: E0204 08:45:45.098340 4644 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 04 08:45:45 crc kubenswrapper[4644]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-nhnlp_openshift-marketplace_42a112d4-2c64-4c7f-a895-a85e29b12d8a_0(2b36f4795d0d536998cc554bbf197a4863d78961871e802a152c5799d3262c32): error adding pod openshift-marketplace_marketplace-operator-79b997595-nhnlp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2b36f4795d0d536998cc554bbf197a4863d78961871e802a152c5799d3262c32" Netns:"/var/run/netns/9f504e2f-3f26-4579-a752-1ddbb114b01e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-nhnlp;K8S_POD_INFRA_CONTAINER_ID=2b36f4795d0d536998cc554bbf197a4863d78961871e802a152c5799d3262c32;K8S_POD_UID=42a112d4-2c64-4c7f-a895-a85e29b12d8a" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-nhnlp] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-nhnlp/42a112d4-2c64-4c7f-a895-a85e29b12d8a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-nhnlp?timeout=1m0s": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:45:45 crc kubenswrapper[4644]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 04 08:45:45 crc kubenswrapper[4644]: > Feb 04 08:45:45 crc kubenswrapper[4644]: E0204 08:45:45.099156 4644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 04 08:45:45 crc kubenswrapper[4644]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-nhnlp_openshift-marketplace_42a112d4-2c64-4c7f-a895-a85e29b12d8a_0(2b36f4795d0d536998cc554bbf197a4863d78961871e802a152c5799d3262c32): error adding pod openshift-marketplace_marketplace-operator-79b997595-nhnlp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2b36f4795d0d536998cc554bbf197a4863d78961871e802a152c5799d3262c32" Netns:"/var/run/netns/9f504e2f-3f26-4579-a752-1ddbb114b01e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-nhnlp;K8S_POD_INFRA_CONTAINER_ID=2b36f4795d0d536998cc554bbf197a4863d78961871e802a152c5799d3262c32;K8S_POD_UID=42a112d4-2c64-4c7f-a895-a85e29b12d8a" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-nhnlp] networking: Multus: 
[openshift-marketplace/marketplace-operator-79b997595-nhnlp/42a112d4-2c64-4c7f-a895-a85e29b12d8a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-nhnlp?timeout=1m0s": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:45:45 crc kubenswrapper[4644]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 04 08:45:45 crc kubenswrapper[4644]: > pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:45 crc kubenswrapper[4644]: E0204 08:45:45.099191 4644 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 04 08:45:45 crc kubenswrapper[4644]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-nhnlp_openshift-marketplace_42a112d4-2c64-4c7f-a895-a85e29b12d8a_0(2b36f4795d0d536998cc554bbf197a4863d78961871e802a152c5799d3262c32): error adding pod openshift-marketplace_marketplace-operator-79b997595-nhnlp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2b36f4795d0d536998cc554bbf197a4863d78961871e802a152c5799d3262c32" Netns:"/var/run/netns/9f504e2f-3f26-4579-a752-1ddbb114b01e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-nhnlp;K8S_POD_INFRA_CONTAINER_ID=2b36f4795d0d536998cc554bbf197a4863d78961871e802a152c5799d3262c32;K8S_POD_UID=42a112d4-2c64-4c7f-a895-a85e29b12d8a" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-nhnlp] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-nhnlp/42a112d4-2c64-4c7f-a895-a85e29b12d8a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-nhnlp?timeout=1m0s": dial tcp 38.102.83.136:6443: connect: connection refused Feb 04 08:45:45 crc kubenswrapper[4644]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 04 08:45:45 crc kubenswrapper[4644]: > pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:45:45 crc kubenswrapper[4644]: E0204 08:45:45.099262 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"marketplace-operator-79b997595-nhnlp_openshift-marketplace(42a112d4-2c64-4c7f-a895-a85e29b12d8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"marketplace-operator-79b997595-nhnlp_openshift-marketplace(42a112d4-2c64-4c7f-a895-a85e29b12d8a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-nhnlp_openshift-marketplace_42a112d4-2c64-4c7f-a895-a85e29b12d8a_0(2b36f4795d0d536998cc554bbf197a4863d78961871e802a152c5799d3262c32): error adding pod openshift-marketplace_marketplace-operator-79b997595-nhnlp to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"2b36f4795d0d536998cc554bbf197a4863d78961871e802a152c5799d3262c32\\\" Netns:\\\"/var/run/netns/9f504e2f-3f26-4579-a752-1ddbb114b01e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-nhnlp;K8S_POD_INFRA_CONTAINER_ID=2b36f4795d0d536998cc554bbf197a4863d78961871e802a152c5799d3262c32;K8S_POD_UID=42a112d4-2c64-4c7f-a895-a85e29b12d8a\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-nhnlp] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-nhnlp/42a112d4-2c64-4c7f-a895-a85e29b12d8a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-nhnlp in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-nhnlp?timeout=1m0s\\\": dial tcp 38.102.83.136:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" podUID="42a112d4-2c64-4c7f-a895-a85e29b12d8a" Feb 04 08:45:45 crc kubenswrapper[4644]: I0204 08:45:45.524987 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f8a6ee8fb4dcbbb20c721fc55a7fe4618b34162dce4352fa2d04087740e3d257"} Feb 04 08:45:45 crc kubenswrapper[4644]: I0204 08:45:45.525043 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b2ce3e2dd5f1307bc1a3dd9f61cda42c1929e2a639aaa72967b4ed35929259f8"} Feb 04 08:45:45 crc kubenswrapper[4644]: I0204 08:45:45.525057 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dd16d683a2510573094ce97a205605c3d55982bff924e2eed71476f820461daa"} Feb 04 08:45:46 crc kubenswrapper[4644]: I0204 08:45:46.535471 4644 kubelet.go:1909] "Trying to 
delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05bc2b7d-ea50-4938-9c52-1d15d68aba83" Feb 04 08:45:46 crc kubenswrapper[4644]: I0204 08:45:46.535498 4644 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05bc2b7d-ea50-4938-9c52-1d15d68aba83" Feb 04 08:45:46 crc kubenswrapper[4644]: I0204 08:45:46.535459 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1688381fe566706bda5719df91e5acedf74ef4bfaa89d3b38e7fbeba5ec6d899"} Feb 04 08:45:46 crc kubenswrapper[4644]: I0204 08:45:46.535646 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:46 crc kubenswrapper[4644]: I0204 08:45:46.535669 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b3e31f7efcbd6518346881935b3eded552b50cac5f2593e6d1873282ab5aba09"} Feb 04 08:45:47 crc kubenswrapper[4644]: I0204 08:45:47.051191 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:45:47 crc kubenswrapper[4644]: I0204 08:45:47.058054 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:45:47 crc kubenswrapper[4644]: I0204 08:45:47.540665 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:45:48 crc kubenswrapper[4644]: I0204 08:45:48.690312 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:48 crc kubenswrapper[4644]: I0204 08:45:48.690748 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:48 crc kubenswrapper[4644]: I0204 08:45:48.699851 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:51 crc kubenswrapper[4644]: I0204 08:45:51.549833 4644 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:51 crc kubenswrapper[4644]: I0204 08:45:51.711035 4644 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d91eaa4b-9b7c-4962-b203-5135fdbab66c" Feb 04 08:45:52 crc kubenswrapper[4644]: I0204 08:45:52.566095 4644 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05bc2b7d-ea50-4938-9c52-1d15d68aba83" Feb 04 08:45:52 crc kubenswrapper[4644]: I0204 08:45:52.566134 4644 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05bc2b7d-ea50-4938-9c52-1d15d68aba83" Feb 04 08:45:52 crc kubenswrapper[4644]: I0204 08:45:52.570742 4644 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d91eaa4b-9b7c-4962-b203-5135fdbab66c" Feb 04 08:45:52 crc kubenswrapper[4644]: I0204 08:45:52.571748 4644 
status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://dd16d683a2510573094ce97a205605c3d55982bff924e2eed71476f820461daa" Feb 04 08:45:52 crc kubenswrapper[4644]: I0204 08:45:52.571773 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 08:45:53 crc kubenswrapper[4644]: I0204 08:45:53.570717 4644 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05bc2b7d-ea50-4938-9c52-1d15d68aba83" Feb 04 08:45:53 crc kubenswrapper[4644]: I0204 08:45:53.570746 4644 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05bc2b7d-ea50-4938-9c52-1d15d68aba83" Feb 04 08:45:53 crc kubenswrapper[4644]: I0204 08:45:53.573971 4644 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d91eaa4b-9b7c-4962-b203-5135fdbab66c" Feb 04 08:45:59 crc kubenswrapper[4644]: I0204 08:45:59.345697 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 08:46:00 crc kubenswrapper[4644]: I0204 08:46:00.659628 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:46:00 crc kubenswrapper[4644]: I0204 08:46:00.667457 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:46:01 crc kubenswrapper[4644]: W0204 08:46:01.087472 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42a112d4_2c64_4c7f_a895_a85e29b12d8a.slice/crio-ed8f5981c2a736624c19e0dc0d08f4cf776ada945b6d201b5b696691d10fb1fc WatchSource:0}: Error finding container ed8f5981c2a736624c19e0dc0d08f4cf776ada945b6d201b5b696691d10fb1fc: Status 404 returned error can't find the container with id ed8f5981c2a736624c19e0dc0d08f4cf776ada945b6d201b5b696691d10fb1fc Feb 04 08:46:01 crc kubenswrapper[4644]: I0204 08:46:01.241111 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 04 08:46:01 crc kubenswrapper[4644]: I0204 08:46:01.459946 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 04 08:46:01 crc kubenswrapper[4644]: I0204 08:46:01.569386 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 04 08:46:01 crc kubenswrapper[4644]: I0204 08:46:01.617890 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nhnlp_42a112d4-2c64-4c7f-a895-a85e29b12d8a/marketplace-operator/0.log" Feb 04 08:46:01 crc kubenswrapper[4644]: I0204 08:46:01.617948 4644 generic.go:334] "Generic (PLEG): container finished" podID="42a112d4-2c64-4c7f-a895-a85e29b12d8a" containerID="af580aa2e575b88c03a8c944f1cfdc6dcfd0bb22a433d809e64a37256b03ffb6" exitCode=1 Feb 04 08:46:01 crc kubenswrapper[4644]: I0204 08:46:01.617980 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" 
event={"ID":"42a112d4-2c64-4c7f-a895-a85e29b12d8a","Type":"ContainerDied","Data":"af580aa2e575b88c03a8c944f1cfdc6dcfd0bb22a433d809e64a37256b03ffb6"} Feb 04 08:46:01 crc kubenswrapper[4644]: I0204 08:46:01.618009 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" event={"ID":"42a112d4-2c64-4c7f-a895-a85e29b12d8a","Type":"ContainerStarted","Data":"ed8f5981c2a736624c19e0dc0d08f4cf776ada945b6d201b5b696691d10fb1fc"} Feb 04 08:46:01 crc kubenswrapper[4644]: I0204 08:46:01.618480 4644 scope.go:117] "RemoveContainer" containerID="af580aa2e575b88c03a8c944f1cfdc6dcfd0bb22a433d809e64a37256b03ffb6" Feb 04 08:46:01 crc kubenswrapper[4644]: I0204 08:46:01.694924 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 04 08:46:01 crc kubenswrapper[4644]: I0204 08:46:01.847077 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 04 08:46:01 crc kubenswrapper[4644]: I0204 08:46:01.926649 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 04 08:46:02 crc kubenswrapper[4644]: I0204 08:46:02.487057 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 04 08:46:02 crc kubenswrapper[4644]: I0204 08:46:02.503793 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 04 08:46:02 crc kubenswrapper[4644]: I0204 08:46:02.615784 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 04 08:46:02 crc kubenswrapper[4644]: I0204 08:46:02.629382 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nhnlp_42a112d4-2c64-4c7f-a895-a85e29b12d8a/marketplace-operator/1.log" Feb 04 08:46:02 crc kubenswrapper[4644]: I0204 08:46:02.630574 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nhnlp_42a112d4-2c64-4c7f-a895-a85e29b12d8a/marketplace-operator/0.log" Feb 04 08:46:02 crc kubenswrapper[4644]: I0204 08:46:02.630652 4644 generic.go:334] "Generic (PLEG): container finished" podID="42a112d4-2c64-4c7f-a895-a85e29b12d8a" containerID="62b246339520e95786afc07f9e42c6decdc8c97dfea26ae8ffcdd8f7106e7037" exitCode=1 Feb 04 08:46:02 crc kubenswrapper[4644]: I0204 08:46:02.630704 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" event={"ID":"42a112d4-2c64-4c7f-a895-a85e29b12d8a","Type":"ContainerDied","Data":"62b246339520e95786afc07f9e42c6decdc8c97dfea26ae8ffcdd8f7106e7037"} Feb 04 08:46:02 crc kubenswrapper[4644]: I0204 08:46:02.630760 4644 scope.go:117] "RemoveContainer" containerID="af580aa2e575b88c03a8c944f1cfdc6dcfd0bb22a433d809e64a37256b03ffb6" Feb 04 08:46:02 crc kubenswrapper[4644]: I0204 08:46:02.632121 4644 scope.go:117] "RemoveContainer" containerID="62b246339520e95786afc07f9e42c6decdc8c97dfea26ae8ffcdd8f7106e7037" Feb 04 08:46:02 crc kubenswrapper[4644]: E0204 08:46:02.632552 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=marketplace-operator pod=marketplace-operator-79b997595-nhnlp_openshift-marketplace(42a112d4-2c64-4c7f-a895-a85e29b12d8a)\"" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" podUID="42a112d4-2c64-4c7f-a895-a85e29b12d8a" Feb 04 08:46:02 crc kubenswrapper[4644]: I0204 08:46:02.920245 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.144220 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.187617 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.397907 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.635715 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nhnlp_42a112d4-2c64-4c7f-a895-a85e29b12d8a/marketplace-operator/1.log" Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.660291 4644 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.665413 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.667431 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=35.66731762 podStartE2EDuration="35.66731762s" podCreationTimestamp="2026-02-04 08:45:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:45:51.658047574 +0000 UTC m=+261.698105329" watchObservedRunningTime="2026-02-04 08:46:03.66731762 +0000 UTC m=+273.707375405" Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.668569 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8rpn5","openshift-marketplace/certified-operators-g7zgx","openshift-marketplace/redhat-marketplace-r5qf8","openshift-marketplace/marketplace-operator-79b997595-bgkbq","openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/community-operators-d8qh5"] Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.669133 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.669735 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nhnlp"] Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.669399 4644 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05bc2b7d-ea50-4938-9c52-1d15d68aba83" Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.669840 4644 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05bc2b7d-ea50-4938-9c52-1d15d68aba83" Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.670448 4644 scope.go:117] "RemoveContainer" containerID="62b246339520e95786afc07f9e42c6decdc8c97dfea26ae8ffcdd8f7106e7037" Feb 04 08:46:03 crc 
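The "back-off 10s" in the CrashLoopBackOff entries above is the first step of the kubelet's restart backoff, which doubles per failed restart up to a cap (commonly 5 minutes). A standalone sketch of that doubling schedule, illustrative only and not kubelet's actual code; the constants are assumptions matching the logged first step:

```go
// Illustrate the CrashLoopBackOff delay schedule implied by "back-off 10s":
// start at 10s, double on each failed restart, cap at 5 minutes (assumed cap).
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialBackoff = 10 * time.Second // matches "back-off 10s" above
		maxBackoff     = 5 * time.Minute  // assumed cap
	)
	delay := initialBackoff
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxBackoff {
			delay = maxBackoff
		}
	}
}
```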
Feb 04 08:46:03 crc kubenswrapper[4644]: E0204 08:46:03.670985 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-nhnlp_openshift-marketplace(42a112d4-2c64-4c7f-a895-a85e29b12d8a)\"" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" podUID="42a112d4-2c64-4c7f-a895-a85e29b12d8a"
Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.676025 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.717013 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.720981 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.720958175 podStartE2EDuration="12.720958175s" podCreationTimestamp="2026-02-04 08:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:46:03.712511416 +0000 UTC m=+273.752569181" watchObservedRunningTime="2026-02-04 08:46:03.720958175 +0000 UTC m=+273.761015960"
Feb 04 08:46:03 crc kubenswrapper[4644]: I0204 08:46:03.880446 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.377051 4644 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.510260 4644 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.524731 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.548025 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.665818 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aa7cc3c-1068-4427-a0c2-24e952c5ed2c" path="/var/lib/kubelet/pods/3aa7cc3c-1068-4427-a0c2-24e952c5ed2c/volumes"
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.666296 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55e01c73-5587-45e5-9a8f-47fedc43d340" path="/var/lib/kubelet/pods/55e01c73-5587-45e5-9a8f-47fedc43d340/volumes"
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.666852 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c424c9d-56cc-42b6-95b2-c23ff3ed8846" path="/var/lib/kubelet/pods/8c424c9d-56cc-42b6-95b2-c23ff3ed8846/volumes"
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.667804 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b3f648-8d29-4f2c-b3f0-cb29c65133bd" path="/var/lib/kubelet/pods/b6b3f648-8d29-4f2c-b3f0-cb29c65133bd/volumes"
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.668320 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b95f491a-610d-44ed-ae19-9e5b7ac25f52" path="/var/lib/kubelet/pods/b95f491a-610d-44ed-ae19-9e5b7ac25f52/volumes"
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.736911 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.739372 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.749736 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.789464 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.790181 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.929771 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 04 08:46:04 crc kubenswrapper[4644]: I0204 08:46:04.957200 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.098637 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.241798 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.241971 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.242088 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.242363 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.309392 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.346160 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.350210 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.359960 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.382914 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.567306 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.586043 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.618770 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.649717 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.719640 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.795539 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.850789 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.920609 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 04 08:46:05 crc kubenswrapper[4644]: I0204 08:46:05.952120 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.014433 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.020507 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.042788 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.115936 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.136654 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.237471 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.251819 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.410414 4644 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.455360 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.508693 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.517074 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.526530 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.633905 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.715310 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.775801 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.848395 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.865562 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.870310 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 04 08:46:06 crc kubenswrapper[4644]: I0204 08:46:06.892710 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.069806 4644 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.116923 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.172516 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.284566 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.329298 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.359594 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.364638 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.411508 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.415508 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.461587 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.471041 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.497172 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.606018 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.722209 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.827532 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 04 08:46:07 crc kubenswrapper[4644]: I0204 08:46:07.942863 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.010090 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.069000 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.082801 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.149524 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.169059 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.217373 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.393093 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.418863 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.453517 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.483171 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.510501 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.556036 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.630017 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.630104 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.630628 4644 scope.go:117] "RemoveContainer" containerID="62b246339520e95786afc07f9e42c6decdc8c97dfea26ae8ffcdd8f7106e7037"
Feb 04 08:46:08 crc kubenswrapper[4644]: E0204 08:46:08.630850 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-nhnlp_openshift-marketplace(42a112d4-2c64-4c7f-a895-a85e29b12d8a)\"" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" podUID="42a112d4-2c64-4c7f-a895-a85e29b12d8a"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.667977 4644 scope.go:117] "RemoveContainer" containerID="62b246339520e95786afc07f9e42c6decdc8c97dfea26ae8ffcdd8f7106e7037"
Feb 04 08:46:08 crc kubenswrapper[4644]: E0204 08:46:08.668195 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-nhnlp_openshift-marketplace(42a112d4-2c64-4c7f-a895-a85e29b12d8a)\"" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" podUID="42a112d4-2c64-4c7f-a895-a85e29b12d8a"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.722969 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.763715 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.786515 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.807995 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.808902 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.838480 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 04 08:46:08 crc kubenswrapper[4644]: I0204 08:46:08.850288 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 04 08:46:09 crc kubenswrapper[4644]: I0204 08:46:09.012154 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 04 08:46:09 crc kubenswrapper[4644]: I0204 08:46:09.069237 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 04 08:46:09 crc kubenswrapper[4644]: I0204 08:46:09.223232 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 04 08:46:09 crc kubenswrapper[4644]: I0204 08:46:09.263727 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
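The "Caches populated" flood above comes from client-go reflectors: the generic ones cite k8s.io/client-go/informers/factory.go, while the object-"namespace"/"name" ones are the kubelet's per-object Secret/ConfigMap watches. A minimal sketch of the generic shared-informer pattern behind those factory.go reflectors (the kubeconfig path is a placeholder; the kubelet's per-object watches are more specialized than this):

```go
// Sketch: a SharedInformerFactory runs one reflector (ListAndWatch) per
// watched type and reports once each local cache has synced, which is
// what the "Caches populated for *v1.<Kind>" lines record.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	_ = factory.Core().V1().Secrets().Informer()    // -> a *v1.Secret reflector
	_ = factory.Core().V1().ConfigMaps().Informer() // -> a *v1.ConfigMap reflector

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// WaitForCacheSync returns per-type flags once the initial List completes.
	for typ, synced := range factory.WaitForCacheSync(stop) {
		fmt.Printf("caches populated for %v: %v\n", typ, synced)
	}
}
```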
object-"openshift-apiserver"/"encryption-config-1" Feb 04 08:46:09 crc kubenswrapper[4644]: I0204 08:46:09.418243 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 04 08:46:09 crc kubenswrapper[4644]: I0204 08:46:09.444517 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 04 08:46:09 crc kubenswrapper[4644]: I0204 08:46:09.448917 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 04 08:46:09 crc kubenswrapper[4644]: I0204 08:46:09.492372 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 04 08:46:09 crc kubenswrapper[4644]: I0204 08:46:09.600317 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 04 08:46:09 crc kubenswrapper[4644]: I0204 08:46:09.650998 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 04 08:46:09 crc kubenswrapper[4644]: I0204 08:46:09.739689 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 04 08:46:09 crc kubenswrapper[4644]: I0204 08:46:09.835557 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 04 08:46:09 crc kubenswrapper[4644]: I0204 08:46:09.930870 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 04 08:46:09 crc kubenswrapper[4644]: I0204 08:46:09.975134 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 04 08:46:09 crc kubenswrapper[4644]: I0204 08:46:09.983156 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.110639 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.122741 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.160303 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.210911 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.260958 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.282046 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.328766 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 
08:46:10.331438 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.332598 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.353191 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.364811 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.450237 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.455390 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.498425 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.585005 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.628093 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.643674 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.673844 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.683838 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.814528 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 04 08:46:10 crc kubenswrapper[4644]: I0204 08:46:10.843225 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.088770 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.106172 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.107891 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.116684 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.226173 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.251454 4644 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.346480 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.399838 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.439922 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.456778 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.805647 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.810524 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.837972 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.892165 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.914539 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.948053 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.959474 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 04 08:46:11 crc kubenswrapper[4644]: I0204 08:46:11.989162 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.098881 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.168848 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.170646 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.301414 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.344197 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.345210 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.413386 4644 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.426930 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.482126 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.516963 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.570834 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.587385 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.623805 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.632532 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.694296 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.743707 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.771440 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.864121 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.865130 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.865718 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.901636 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 04 08:46:12 crc kubenswrapper[4644]: I0204 08:46:12.997080 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 04 08:46:13 crc kubenswrapper[4644]: I0204 08:46:13.034555 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 04 08:46:13 crc kubenswrapper[4644]: I0204 08:46:13.114758 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 04 08:46:13 crc kubenswrapper[4644]: I0204 08:46:13.123374 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 04 08:46:13 crc kubenswrapper[4644]: I0204 08:46:13.138511 4644 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-apiserver"/"audit-1" Feb 04 08:46:13 crc kubenswrapper[4644]: I0204 08:46:13.207521 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 04 08:46:13 crc kubenswrapper[4644]: I0204 08:46:13.222304 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 04 08:46:13 crc kubenswrapper[4644]: I0204 08:46:13.388045 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 04 08:46:13 crc kubenswrapper[4644]: I0204 08:46:13.494367 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 04 08:46:13 crc kubenswrapper[4644]: I0204 08:46:13.535129 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 04 08:46:13 crc kubenswrapper[4644]: I0204 08:46:13.644707 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 04 08:46:13 crc kubenswrapper[4644]: I0204 08:46:13.655088 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 04 08:46:13 crc kubenswrapper[4644]: I0204 08:46:13.960362 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 04 08:46:13 crc kubenswrapper[4644]: I0204 08:46:13.972159 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.030199 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.074881 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.153637 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.223899 4644 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.224387 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://d056f3d01918dd2d04a5271731b5ff74284d753e712f6cc3c9aae3c729e3ac52" gracePeriod=5 Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.277495 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.293509 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.576793 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.653805 4644 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.666076 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.677457 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.786312 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.903674 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.912381 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.952308 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.956432 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 04 08:46:14 crc kubenswrapper[4644]: I0204 08:46:14.996426 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 04 08:46:15 crc kubenswrapper[4644]: I0204 08:46:15.009552 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 04 08:46:15 crc kubenswrapper[4644]: I0204 08:46:15.012990 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 04 08:46:15 crc kubenswrapper[4644]: I0204 08:46:15.042468 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 04 08:46:15 crc kubenswrapper[4644]: I0204 08:46:15.172724 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 04 08:46:15 crc kubenswrapper[4644]: I0204 08:46:15.288674 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 04 08:46:15 crc kubenswrapper[4644]: I0204 08:46:15.345238 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 04 08:46:15 crc kubenswrapper[4644]: I0204 08:46:15.346591 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 04 08:46:15 crc kubenswrapper[4644]: I0204 08:46:15.582012 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 04 08:46:15 crc kubenswrapper[4644]: I0204 08:46:15.658807 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 04 08:46:15 crc kubenswrapper[4644]: I0204 08:46:15.714825 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 04 08:46:15 crc kubenswrapper[4644]: I0204 08:46:15.813534 4644 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"proxy-tls" Feb 04 08:46:16 crc kubenswrapper[4644]: I0204 08:46:16.044970 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 04 08:46:16 crc kubenswrapper[4644]: I0204 08:46:16.051540 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 04 08:46:16 crc kubenswrapper[4644]: I0204 08:46:16.073626 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 04 08:46:16 crc kubenswrapper[4644]: I0204 08:46:16.089450 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 04 08:46:16 crc kubenswrapper[4644]: I0204 08:46:16.098378 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 04 08:46:16 crc kubenswrapper[4644]: I0204 08:46:16.137131 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 04 08:46:16 crc kubenswrapper[4644]: I0204 08:46:16.151693 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 04 08:46:16 crc kubenswrapper[4644]: I0204 08:46:16.159720 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 04 08:46:16 crc kubenswrapper[4644]: I0204 08:46:16.257440 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 04 08:46:16 crc kubenswrapper[4644]: I0204 08:46:16.446303 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 04 08:46:16 crc kubenswrapper[4644]: I0204 08:46:16.703858 4644 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 04 08:46:16 crc kubenswrapper[4644]: I0204 08:46:16.837087 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 04 08:46:16 crc kubenswrapper[4644]: I0204 08:46:16.883011 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 04 08:46:16 crc kubenswrapper[4644]: I0204 08:46:16.893500 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 04 08:46:16 crc kubenswrapper[4644]: I0204 08:46:16.905173 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 04 08:46:17 crc kubenswrapper[4644]: I0204 08:46:17.111782 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 04 08:46:17 crc kubenswrapper[4644]: I0204 08:46:17.251370 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 04 08:46:17 crc kubenswrapper[4644]: I0204 08:46:17.298050 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 04 08:46:17 crc kubenswrapper[4644]: I0204 08:46:17.338750 4644 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 04 08:46:17 crc kubenswrapper[4644]: I0204 08:46:17.368425 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 04 08:46:17 crc kubenswrapper[4644]: I0204 08:46:17.446638 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 04 08:46:17 crc kubenswrapper[4644]: I0204 08:46:17.455303 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 04 08:46:17 crc kubenswrapper[4644]: I0204 08:46:17.473924 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 04 08:46:17 crc kubenswrapper[4644]: I0204 08:46:17.504521 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 04 08:46:17 crc kubenswrapper[4644]: I0204 08:46:17.605837 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.659370 4644 scope.go:117] "RemoveContainer" containerID="62b246339520e95786afc07f9e42c6decdc8c97dfea26ae8ffcdd8f7106e7037" Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.719971 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.720011 4644 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="d056f3d01918dd2d04a5271731b5ff74284d753e712f6cc3c9aae3c729e3ac52" exitCode=137 Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.794450 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.794529 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.951860 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.951914 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.951993 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.952019 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.952058 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.952117 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.952159 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.952180 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.952246 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.952495 4644 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.952512 4644 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.952527 4644 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.952539 4644 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 04 08:46:19 crc kubenswrapper[4644]: I0204 08:46:19.970525 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:46:20 crc kubenswrapper[4644]: I0204 08:46:20.053619 4644 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 04 08:46:20 crc kubenswrapper[4644]: I0204 08:46:20.665265 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 04 08:46:20 crc kubenswrapper[4644]: I0204 08:46:20.665557 4644 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 04 08:46:20 crc kubenswrapper[4644]: I0204 08:46:20.674719 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 04 08:46:20 crc kubenswrapper[4644]: I0204 08:46:20.675007 4644 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0da4c68a-7e72-4ccf-a086-fb8043232e4c" Feb 04 08:46:20 crc kubenswrapper[4644]: I0204 08:46:20.679295 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 04 08:46:20 crc kubenswrapper[4644]: I0204 08:46:20.679346 4644 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0da4c68a-7e72-4ccf-a086-fb8043232e4c" Feb 04 08:46:20 crc kubenswrapper[4644]: I0204 08:46:20.727506 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nhnlp_42a112d4-2c64-4c7f-a895-a85e29b12d8a/marketplace-operator/1.log" Feb 04 08:46:20 crc kubenswrapper[4644]: I0204 08:46:20.727576 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" 
event={"ID":"42a112d4-2c64-4c7f-a895-a85e29b12d8a","Type":"ContainerStarted","Data":"a148bf321979bee1e20c48488fc83a7503f6aff39d51af00ab8e556af475d9d1"} Feb 04 08:46:20 crc kubenswrapper[4644]: I0204 08:46:20.728729 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:46:20 crc kubenswrapper[4644]: I0204 08:46:20.730854 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 04 08:46:20 crc kubenswrapper[4644]: I0204 08:46:20.731009 4644 scope.go:117] "RemoveContainer" containerID="d056f3d01918dd2d04a5271731b5ff74284d753e712f6cc3c9aae3c729e3ac52" Feb 04 08:46:20 crc kubenswrapper[4644]: I0204 08:46:20.731128 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 08:46:20 crc kubenswrapper[4644]: I0204 08:46:20.733653 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" Feb 04 08:46:20 crc kubenswrapper[4644]: I0204 08:46:20.748786 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nhnlp" podStartSLOduration=52.74876249 podStartE2EDuration="52.74876249s" podCreationTimestamp="2026-02-04 08:45:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:46:20.7450774 +0000 UTC m=+290.785135155" watchObservedRunningTime="2026-02-04 08:46:20.74876249 +0000 UTC m=+290.788820255" Feb 04 08:46:30 crc kubenswrapper[4644]: I0204 08:46:30.426086 4644 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 04 08:46:31 crc kubenswrapper[4644]: I0204 08:46:31.685614 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 04 08:46:32 crc kubenswrapper[4644]: I0204 08:46:32.041976 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 04 08:46:33 crc kubenswrapper[4644]: I0204 08:46:33.506480 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 04 08:46:36 crc kubenswrapper[4644]: I0204 08:46:36.979575 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 04 08:46:47 crc kubenswrapper[4644]: I0204 08:46:47.071923 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 04 08:46:53 crc kubenswrapper[4644]: I0204 08:46:53.393043 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.460500 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cghxg"] Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.461242 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" podUID="be892c9a-a311-4937-8c75-71fa5452379a" containerName="controller-manager" 
containerID="cri-o://bb599df5e00d9a7418f84804b15c430f9a495bf4ea8e0b6a9dcc2ae117dc5002" gracePeriod=30 Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.558550 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg"] Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.558796 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" podUID="32e58256-f013-4739-893b-6d403836f94e" containerName="route-controller-manager" containerID="cri-o://057c1d108e53545590dbc8749dcd83e5d3c52ffc65319e8153732e1e9df53b27" gracePeriod=30 Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.799517 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.863315 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.894784 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be892c9a-a311-4937-8c75-71fa5452379a-serving-cert\") pod \"be892c9a-a311-4937-8c75-71fa5452379a\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.894877 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-client-ca\") pod \"be892c9a-a311-4937-8c75-71fa5452379a\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.894900 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-proxy-ca-bundles\") pod \"be892c9a-a311-4937-8c75-71fa5452379a\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.894928 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7rvk\" (UniqueName: \"kubernetes.io/projected/be892c9a-a311-4937-8c75-71fa5452379a-kube-api-access-g7rvk\") pod \"be892c9a-a311-4937-8c75-71fa5452379a\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.894951 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-config\") pod \"be892c9a-a311-4937-8c75-71fa5452379a\" (UID: \"be892c9a-a311-4937-8c75-71fa5452379a\") " Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.895841 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-client-ca" (OuterVolumeSpecName: "client-ca") pod "be892c9a-a311-4937-8c75-71fa5452379a" (UID: "be892c9a-a311-4937-8c75-71fa5452379a"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.896342 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "be892c9a-a311-4937-8c75-71fa5452379a" (UID: "be892c9a-a311-4937-8c75-71fa5452379a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.900238 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be892c9a-a311-4937-8c75-71fa5452379a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "be892c9a-a311-4937-8c75-71fa5452379a" (UID: "be892c9a-a311-4937-8c75-71fa5452379a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.900370 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be892c9a-a311-4937-8c75-71fa5452379a-kube-api-access-g7rvk" (OuterVolumeSpecName: "kube-api-access-g7rvk") pod "be892c9a-a311-4937-8c75-71fa5452379a" (UID: "be892c9a-a311-4937-8c75-71fa5452379a"). InnerVolumeSpecName "kube-api-access-g7rvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.909436 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-config" (OuterVolumeSpecName: "config") pod "be892c9a-a311-4937-8c75-71fa5452379a" (UID: "be892c9a-a311-4937-8c75-71fa5452379a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.995896 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32e58256-f013-4739-893b-6d403836f94e-client-ca\") pod \"32e58256-f013-4739-893b-6d403836f94e\" (UID: \"32e58256-f013-4739-893b-6d403836f94e\") " Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.995981 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e58256-f013-4739-893b-6d403836f94e-config\") pod \"32e58256-f013-4739-893b-6d403836f94e\" (UID: \"32e58256-f013-4739-893b-6d403836f94e\") " Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.996014 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e58256-f013-4739-893b-6d403836f94e-serving-cert\") pod \"32e58256-f013-4739-893b-6d403836f94e\" (UID: \"32e58256-f013-4739-893b-6d403836f94e\") " Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.996068 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpvfq\" (UniqueName: \"kubernetes.io/projected/32e58256-f013-4739-893b-6d403836f94e-kube-api-access-xpvfq\") pod \"32e58256-f013-4739-893b-6d403836f94e\" (UID: \"32e58256-f013-4739-893b-6d403836f94e\") " Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.996234 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be892c9a-a311-4937-8c75-71fa5452379a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.996246 4644 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.996255 4644 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.996265 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7rvk\" (UniqueName: \"kubernetes.io/projected/be892c9a-a311-4937-8c75-71fa5452379a-kube-api-access-g7rvk\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.996273 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be892c9a-a311-4937-8c75-71fa5452379a-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.997583 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e58256-f013-4739-893b-6d403836f94e-config" (OuterVolumeSpecName: "config") pod "32e58256-f013-4739-893b-6d403836f94e" (UID: "32e58256-f013-4739-893b-6d403836f94e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.998057 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e58256-f013-4739-893b-6d403836f94e-client-ca" (OuterVolumeSpecName: "client-ca") pod "32e58256-f013-4739-893b-6d403836f94e" (UID: "32e58256-f013-4739-893b-6d403836f94e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:47:12 crc kubenswrapper[4644]: I0204 08:47:12.998980 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32e58256-f013-4739-893b-6d403836f94e-kube-api-access-xpvfq" (OuterVolumeSpecName: "kube-api-access-xpvfq") pod "32e58256-f013-4739-893b-6d403836f94e" (UID: "32e58256-f013-4739-893b-6d403836f94e"). InnerVolumeSpecName "kube-api-access-xpvfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.001032 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32e58256-f013-4739-893b-6d403836f94e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "32e58256-f013-4739-893b-6d403836f94e" (UID: "32e58256-f013-4739-893b-6d403836f94e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.014967 4644 generic.go:334] "Generic (PLEG): container finished" podID="be892c9a-a311-4937-8c75-71fa5452379a" containerID="bb599df5e00d9a7418f84804b15c430f9a495bf4ea8e0b6a9dcc2ae117dc5002" exitCode=0 Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.015018 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" event={"ID":"be892c9a-a311-4937-8c75-71fa5452379a","Type":"ContainerDied","Data":"bb599df5e00d9a7418f84804b15c430f9a495bf4ea8e0b6a9dcc2ae117dc5002"} Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.015060 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" event={"ID":"be892c9a-a311-4937-8c75-71fa5452379a","Type":"ContainerDied","Data":"f5f92d21cadfee88cfbe7d65e42669536f6c6ada9e4d28b7a424376d16e0152f"} Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.015077 4644 scope.go:117] "RemoveContainer" containerID="bb599df5e00d9a7418f84804b15c430f9a495bf4ea8e0b6a9dcc2ae117dc5002" Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.015081 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cghxg" Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.017904 4644 generic.go:334] "Generic (PLEG): container finished" podID="32e58256-f013-4739-893b-6d403836f94e" containerID="057c1d108e53545590dbc8749dcd83e5d3c52ffc65319e8153732e1e9df53b27" exitCode=0 Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.017960 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" event={"ID":"32e58256-f013-4739-893b-6d403836f94e","Type":"ContainerDied","Data":"057c1d108e53545590dbc8749dcd83e5d3c52ffc65319e8153732e1e9df53b27"} Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.017999 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" event={"ID":"32e58256-f013-4739-893b-6d403836f94e","Type":"ContainerDied","Data":"06b228065db205b9d5e7ae34c66e8cac9d13d622d2d2e658d22c78f27d53d7d2"} Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.018010 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg" Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.034932 4644 scope.go:117] "RemoveContainer" containerID="bb599df5e00d9a7418f84804b15c430f9a495bf4ea8e0b6a9dcc2ae117dc5002" Feb 04 08:47:13 crc kubenswrapper[4644]: E0204 08:47:13.035468 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb599df5e00d9a7418f84804b15c430f9a495bf4ea8e0b6a9dcc2ae117dc5002\": container with ID starting with bb599df5e00d9a7418f84804b15c430f9a495bf4ea8e0b6a9dcc2ae117dc5002 not found: ID does not exist" containerID="bb599df5e00d9a7418f84804b15c430f9a495bf4ea8e0b6a9dcc2ae117dc5002" Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.035512 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb599df5e00d9a7418f84804b15c430f9a495bf4ea8e0b6a9dcc2ae117dc5002"} err="failed to get container status \"bb599df5e00d9a7418f84804b15c430f9a495bf4ea8e0b6a9dcc2ae117dc5002\": rpc error: code = NotFound desc = could not find container \"bb599df5e00d9a7418f84804b15c430f9a495bf4ea8e0b6a9dcc2ae117dc5002\": container with ID starting with bb599df5e00d9a7418f84804b15c430f9a495bf4ea8e0b6a9dcc2ae117dc5002 not found: ID does not exist" Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.035537 4644 scope.go:117] "RemoveContainer" containerID="057c1d108e53545590dbc8749dcd83e5d3c52ffc65319e8153732e1e9df53b27" Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.056960 4644 scope.go:117] "RemoveContainer" containerID="057c1d108e53545590dbc8749dcd83e5d3c52ffc65319e8153732e1e9df53b27" Feb 04 08:47:13 crc kubenswrapper[4644]: E0204 08:47:13.057450 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"057c1d108e53545590dbc8749dcd83e5d3c52ffc65319e8153732e1e9df53b27\": container with ID starting with 057c1d108e53545590dbc8749dcd83e5d3c52ffc65319e8153732e1e9df53b27 not found: ID does not exist" containerID="057c1d108e53545590dbc8749dcd83e5d3c52ffc65319e8153732e1e9df53b27" Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.057570 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"057c1d108e53545590dbc8749dcd83e5d3c52ffc65319e8153732e1e9df53b27"} err="failed to get container status \"057c1d108e53545590dbc8749dcd83e5d3c52ffc65319e8153732e1e9df53b27\": rpc error: code = NotFound desc = could not find container \"057c1d108e53545590dbc8749dcd83e5d3c52ffc65319e8153732e1e9df53b27\": container with ID starting with 057c1d108e53545590dbc8749dcd83e5d3c52ffc65319e8153732e1e9df53b27 not found: ID does not exist" Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.057993 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cghxg"] Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.061882 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cghxg"] Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.076287 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg"] Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.087108 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4jgmg"] Feb 04 
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.097920 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpvfq\" (UniqueName: \"kubernetes.io/projected/32e58256-f013-4739-893b-6d403836f94e-kube-api-access-xpvfq\") on node \"crc\" DevicePath \"\""
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.097955 4644 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32e58256-f013-4739-893b-6d403836f94e-client-ca\") on node \"crc\" DevicePath \"\""
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.097991 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e58256-f013-4739-893b-6d403836f94e-config\") on node \"crc\" DevicePath \"\""
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.098003 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e58256-f013-4739-893b-6d403836f94e-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.868779 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76ddd7869-md9gm"]
Feb 04 08:47:13 crc kubenswrapper[4644]: E0204 08:47:13.869244 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be892c9a-a311-4937-8c75-71fa5452379a" containerName="controller-manager"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.869275 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="be892c9a-a311-4937-8c75-71fa5452379a" containerName="controller-manager"
Feb 04 08:47:13 crc kubenswrapper[4644]: E0204 08:47:13.869317 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.869372 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 04 08:47:13 crc kubenswrapper[4644]: E0204 08:47:13.869402 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32e58256-f013-4739-893b-6d403836f94e" containerName="route-controller-manager"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.869422 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="32e58256-f013-4739-893b-6d403836f94e" containerName="route-controller-manager"
Feb 04 08:47:13 crc kubenswrapper[4644]: E0204 08:47:13.869449 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" containerName="installer"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.869466 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" containerName="installer"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.869677 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.869710 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="32e58256-f013-4739-893b-6d403836f94e" containerName="route-controller-manager"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.869738 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="967f5ee0-e078-4ed0-83ae-7147dcd7192e" containerName="installer"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.869759 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="be892c9a-a311-4937-8c75-71fa5452379a" containerName="controller-manager"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.870628 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.872944 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"]
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.873605 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.880061 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.880695 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.880943 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.881170 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.880707 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.880761 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.882039 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.882142 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.882394 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.882402 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.882608 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.883415 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.893412 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"]
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.896446 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76ddd7869-md9gm"]
Feb 04 08:47:13 crc kubenswrapper[4644]: I0204 08:47:13.898487 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.006950 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37e7f049-aff5-41ae-80e9-19f4a62962bf-serving-cert\") pod \"route-controller-manager-5949b886dc-grkpw\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.007640 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crlz6\" (UniqueName: \"kubernetes.io/projected/37e7f049-aff5-41ae-80e9-19f4a62962bf-kube-api-access-crlz6\") pod \"route-controller-manager-5949b886dc-grkpw\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.007800 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-config\") pod \"controller-manager-76ddd7869-md9gm\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.007962 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-client-ca\") pod \"controller-manager-76ddd7869-md9gm\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.008106 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c87f5757-8b7f-42a4-8edf-71aeb72748b1-serving-cert\") pod \"controller-manager-76ddd7869-md9gm\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.008398 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-proxy-ca-bundles\") pod \"controller-manager-76ddd7869-md9gm\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.008546 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4tlw\" (UniqueName: \"kubernetes.io/projected/c87f5757-8b7f-42a4-8edf-71aeb72748b1-kube-api-access-c4tlw\") pod \"controller-manager-76ddd7869-md9gm\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.008677 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37e7f049-aff5-41ae-80e9-19f4a62962bf-config\") pod \"route-controller-manager-5949b886dc-grkpw\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.008804 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37e7f049-aff5-41ae-80e9-19f4a62962bf-client-ca\") pod \"route-controller-manager-5949b886dc-grkpw\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.110334 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crlz6\" (UniqueName: \"kubernetes.io/projected/37e7f049-aff5-41ae-80e9-19f4a62962bf-kube-api-access-crlz6\") pod \"route-controller-manager-5949b886dc-grkpw\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.110595 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37e7f049-aff5-41ae-80e9-19f4a62962bf-serving-cert\") pod \"route-controller-manager-5949b886dc-grkpw\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.111474 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-config\") pod \"controller-manager-76ddd7869-md9gm\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.112500 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-client-ca\") pod \"controller-manager-76ddd7869-md9gm\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.112548 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c87f5757-8b7f-42a4-8edf-71aeb72748b1-serving-cert\") pod \"controller-manager-76ddd7869-md9gm\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.112591 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-proxy-ca-bundles\") pod \"controller-manager-76ddd7869-md9gm\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.112630 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4tlw\" (UniqueName: \"kubernetes.io/projected/c87f5757-8b7f-42a4-8edf-71aeb72748b1-kube-api-access-c4tlw\") pod \"controller-manager-76ddd7869-md9gm\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.112665 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37e7f049-aff5-41ae-80e9-19f4a62962bf-config\") pod \"route-controller-manager-5949b886dc-grkpw\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.112691 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37e7f049-aff5-41ae-80e9-19f4a62962bf-client-ca\") pod \"route-controller-manager-5949b886dc-grkpw\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.113555 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-config\") pod \"controller-manager-76ddd7869-md9gm\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.113889 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-client-ca\") pod \"controller-manager-76ddd7869-md9gm\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.114056 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-proxy-ca-bundles\") pod \"controller-manager-76ddd7869-md9gm\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.114478 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37e7f049-aff5-41ae-80e9-19f4a62962bf-client-ca\") pod \"route-controller-manager-5949b886dc-grkpw\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.114837 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37e7f049-aff5-41ae-80e9-19f4a62962bf-config\") pod \"route-controller-manager-5949b886dc-grkpw\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"
Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.120914 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37e7f049-aff5-41ae-80e9-19f4a62962bf-serving-cert\") pod \"route-controller-manager-5949b886dc-grkpw\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"
pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm" Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.128148 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crlz6\" (UniqueName: \"kubernetes.io/projected/37e7f049-aff5-41ae-80e9-19f4a62962bf-kube-api-access-crlz6\") pod \"route-controller-manager-5949b886dc-grkpw\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw" Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.130548 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4tlw\" (UniqueName: \"kubernetes.io/projected/c87f5757-8b7f-42a4-8edf-71aeb72748b1-kube-api-access-c4tlw\") pod \"controller-manager-76ddd7869-md9gm\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm" Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.205596 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm" Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.215680 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw" Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.424904 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76ddd7869-md9gm"] Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.453958 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"] Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.665851 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32e58256-f013-4739-893b-6d403836f94e" path="/var/lib/kubelet/pods/32e58256-f013-4739-893b-6d403836f94e/volumes" Feb 04 08:47:14 crc kubenswrapper[4644]: I0204 08:47:14.666569 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be892c9a-a311-4937-8c75-71fa5452379a" path="/var/lib/kubelet/pods/be892c9a-a311-4937-8c75-71fa5452379a/volumes" Feb 04 08:47:15 crc kubenswrapper[4644]: I0204 08:47:15.029900 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw" event={"ID":"37e7f049-aff5-41ae-80e9-19f4a62962bf","Type":"ContainerStarted","Data":"e54e617ba0850673cb8edf294022f457831f74dcafaec3cc19ef93037d1b1ae7"} Feb 04 08:47:15 crc kubenswrapper[4644]: I0204 08:47:15.030346 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw" event={"ID":"37e7f049-aff5-41ae-80e9-19f4a62962bf","Type":"ContainerStarted","Data":"206ff9a5d23f8f1a03acc5f744c0d4e2707cd0621c1220b22b23ef85728a64cd"} Feb 04 08:47:15 crc kubenswrapper[4644]: I0204 08:47:15.030684 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw" Feb 04 08:47:15 crc kubenswrapper[4644]: I0204 08:47:15.043126 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm" event={"ID":"c87f5757-8b7f-42a4-8edf-71aeb72748b1","Type":"ContainerStarted","Data":"181cbf5d488127516de71c9684596b8e886149daadd799ca0872c781ec58f59f"} Feb 04 08:47:15 crc 
kubenswrapper[4644]: I0204 08:47:15.043175 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm" event={"ID":"c87f5757-8b7f-42a4-8edf-71aeb72748b1","Type":"ContainerStarted","Data":"fd7b32458794b200133bc8b60f4699ba3143d532bd9179640569c99d16984bcf"} Feb 04 08:47:15 crc kubenswrapper[4644]: I0204 08:47:15.044195 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm" Feb 04 08:47:15 crc kubenswrapper[4644]: I0204 08:47:15.048541 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm" Feb 04 08:47:15 crc kubenswrapper[4644]: I0204 08:47:15.054305 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw" podStartSLOduration=3.054291923 podStartE2EDuration="3.054291923s" podCreationTimestamp="2026-02-04 08:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:47:15.052912546 +0000 UTC m=+345.092970311" watchObservedRunningTime="2026-02-04 08:47:15.054291923 +0000 UTC m=+345.094349678" Feb 04 08:47:15 crc kubenswrapper[4644]: I0204 08:47:15.071425 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm" podStartSLOduration=3.071404994 podStartE2EDuration="3.071404994s" podCreationTimestamp="2026-02-04 08:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:47:15.070875581 +0000 UTC m=+345.110933336" watchObservedRunningTime="2026-02-04 08:47:15.071404994 +0000 UTC m=+345.111462749" Feb 04 08:47:15 crc kubenswrapper[4644]: I0204 08:47:15.209518 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw" Feb 04 08:47:19 crc kubenswrapper[4644]: I0204 08:47:19.548956 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76ddd7869-md9gm"] Feb 04 08:47:19 crc kubenswrapper[4644]: I0204 08:47:19.549462 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm" podUID="c87f5757-8b7f-42a4-8edf-71aeb72748b1" containerName="controller-manager" containerID="cri-o://181cbf5d488127516de71c9684596b8e886149daadd799ca0872c781ec58f59f" gracePeriod=30 Feb 04 08:47:19 crc kubenswrapper[4644]: I0204 08:47:19.571015 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"] Feb 04 08:47:19 crc kubenswrapper[4644]: I0204 08:47:19.571196 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw" podUID="37e7f049-aff5-41ae-80e9-19f4a62962bf" containerName="route-controller-manager" containerID="cri-o://e54e617ba0850673cb8edf294022f457831f74dcafaec3cc19ef93037d1b1ae7" gracePeriod=30 Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.015563 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.020850 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.070075 4644 generic.go:334] "Generic (PLEG): container finished" podID="37e7f049-aff5-41ae-80e9-19f4a62962bf" containerID="e54e617ba0850673cb8edf294022f457831f74dcafaec3cc19ef93037d1b1ae7" exitCode=0 Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.070155 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw" event={"ID":"37e7f049-aff5-41ae-80e9-19f4a62962bf","Type":"ContainerDied","Data":"e54e617ba0850673cb8edf294022f457831f74dcafaec3cc19ef93037d1b1ae7"} Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.070188 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw" event={"ID":"37e7f049-aff5-41ae-80e9-19f4a62962bf","Type":"ContainerDied","Data":"206ff9a5d23f8f1a03acc5f744c0d4e2707cd0621c1220b22b23ef85728a64cd"} Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.070208 4644 scope.go:117] "RemoveContainer" containerID="e54e617ba0850673cb8edf294022f457831f74dcafaec3cc19ef93037d1b1ae7" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.070363 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.072999 4644 generic.go:334] "Generic (PLEG): container finished" podID="c87f5757-8b7f-42a4-8edf-71aeb72748b1" containerID="181cbf5d488127516de71c9684596b8e886149daadd799ca0872c781ec58f59f" exitCode=0 Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.073057 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm" event={"ID":"c87f5757-8b7f-42a4-8edf-71aeb72748b1","Type":"ContainerDied","Data":"181cbf5d488127516de71c9684596b8e886149daadd799ca0872c781ec58f59f"} Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.073078 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm" event={"ID":"c87f5757-8b7f-42a4-8edf-71aeb72748b1","Type":"ContainerDied","Data":"fd7b32458794b200133bc8b60f4699ba3143d532bd9179640569c99d16984bcf"} Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.073163 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76ddd7869-md9gm" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.092307 4644 scope.go:117] "RemoveContainer" containerID="e54e617ba0850673cb8edf294022f457831f74dcafaec3cc19ef93037d1b1ae7" Feb 04 08:47:20 crc kubenswrapper[4644]: E0204 08:47:20.092763 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e54e617ba0850673cb8edf294022f457831f74dcafaec3cc19ef93037d1b1ae7\": container with ID starting with e54e617ba0850673cb8edf294022f457831f74dcafaec3cc19ef93037d1b1ae7 not found: ID does not exist" containerID="e54e617ba0850673cb8edf294022f457831f74dcafaec3cc19ef93037d1b1ae7" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.092811 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54e617ba0850673cb8edf294022f457831f74dcafaec3cc19ef93037d1b1ae7"} err="failed to get container status \"e54e617ba0850673cb8edf294022f457831f74dcafaec3cc19ef93037d1b1ae7\": rpc error: code = NotFound desc = could not find container \"e54e617ba0850673cb8edf294022f457831f74dcafaec3cc19ef93037d1b1ae7\": container with ID starting with e54e617ba0850673cb8edf294022f457831f74dcafaec3cc19ef93037d1b1ae7 not found: ID does not exist" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.092839 4644 scope.go:117] "RemoveContainer" containerID="181cbf5d488127516de71c9684596b8e886149daadd799ca0872c781ec58f59f" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.109297 4644 scope.go:117] "RemoveContainer" containerID="181cbf5d488127516de71c9684596b8e886149daadd799ca0872c781ec58f59f" Feb 04 08:47:20 crc kubenswrapper[4644]: E0204 08:47:20.109865 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"181cbf5d488127516de71c9684596b8e886149daadd799ca0872c781ec58f59f\": container with ID starting with 181cbf5d488127516de71c9684596b8e886149daadd799ca0872c781ec58f59f not found: ID does not exist" containerID="181cbf5d488127516de71c9684596b8e886149daadd799ca0872c781ec58f59f" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.109904 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"181cbf5d488127516de71c9684596b8e886149daadd799ca0872c781ec58f59f"} err="failed to get container status \"181cbf5d488127516de71c9684596b8e886149daadd799ca0872c781ec58f59f\": rpc error: code = NotFound desc = could not find container \"181cbf5d488127516de71c9684596b8e886149daadd799ca0872c781ec58f59f\": container with ID starting with 181cbf5d488127516de71c9684596b8e886149daadd799ca0872c781ec58f59f not found: ID does not exist" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.190135 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-config\") pod \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.190178 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37e7f049-aff5-41ae-80e9-19f4a62962bf-serving-cert\") pod \"37e7f049-aff5-41ae-80e9-19f4a62962bf\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.190197 4644 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37e7f049-aff5-41ae-80e9-19f4a62962bf-config\") pod \"37e7f049-aff5-41ae-80e9-19f4a62962bf\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.190212 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-proxy-ca-bundles\") pod \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.190243 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crlz6\" (UniqueName: \"kubernetes.io/projected/37e7f049-aff5-41ae-80e9-19f4a62962bf-kube-api-access-crlz6\") pod \"37e7f049-aff5-41ae-80e9-19f4a62962bf\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.190261 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-client-ca\") pod \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.190282 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c87f5757-8b7f-42a4-8edf-71aeb72748b1-serving-cert\") pod \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.190315 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4tlw\" (UniqueName: \"kubernetes.io/projected/c87f5757-8b7f-42a4-8edf-71aeb72748b1-kube-api-access-c4tlw\") pod \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\" (UID: \"c87f5757-8b7f-42a4-8edf-71aeb72748b1\") " Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.190375 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37e7f049-aff5-41ae-80e9-19f4a62962bf-client-ca\") pod \"37e7f049-aff5-41ae-80e9-19f4a62962bf\" (UID: \"37e7f049-aff5-41ae-80e9-19f4a62962bf\") " Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.190866 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-config" (OuterVolumeSpecName: "config") pod "c87f5757-8b7f-42a4-8edf-71aeb72748b1" (UID: "c87f5757-8b7f-42a4-8edf-71aeb72748b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.190905 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37e7f049-aff5-41ae-80e9-19f4a62962bf-client-ca" (OuterVolumeSpecName: "client-ca") pod "37e7f049-aff5-41ae-80e9-19f4a62962bf" (UID: "37e7f049-aff5-41ae-80e9-19f4a62962bf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.191209 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37e7f049-aff5-41ae-80e9-19f4a62962bf-config" (OuterVolumeSpecName: "config") pod "37e7f049-aff5-41ae-80e9-19f4a62962bf" (UID: "37e7f049-aff5-41ae-80e9-19f4a62962bf"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.191492 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c87f5757-8b7f-42a4-8edf-71aeb72748b1" (UID: "c87f5757-8b7f-42a4-8edf-71aeb72748b1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.191543 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-client-ca" (OuterVolumeSpecName: "client-ca") pod "c87f5757-8b7f-42a4-8edf-71aeb72748b1" (UID: "c87f5757-8b7f-42a4-8edf-71aeb72748b1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.195055 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c87f5757-8b7f-42a4-8edf-71aeb72748b1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c87f5757-8b7f-42a4-8edf-71aeb72748b1" (UID: "c87f5757-8b7f-42a4-8edf-71aeb72748b1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.195212 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37e7f049-aff5-41ae-80e9-19f4a62962bf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "37e7f049-aff5-41ae-80e9-19f4a62962bf" (UID: "37e7f049-aff5-41ae-80e9-19f4a62962bf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.196115 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c87f5757-8b7f-42a4-8edf-71aeb72748b1-kube-api-access-c4tlw" (OuterVolumeSpecName: "kube-api-access-c4tlw") pod "c87f5757-8b7f-42a4-8edf-71aeb72748b1" (UID: "c87f5757-8b7f-42a4-8edf-71aeb72748b1"). InnerVolumeSpecName "kube-api-access-c4tlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.199429 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e7f049-aff5-41ae-80e9-19f4a62962bf-kube-api-access-crlz6" (OuterVolumeSpecName: "kube-api-access-crlz6") pod "37e7f049-aff5-41ae-80e9-19f4a62962bf" (UID: "37e7f049-aff5-41ae-80e9-19f4a62962bf"). InnerVolumeSpecName "kube-api-access-crlz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.292351 4644 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37e7f049-aff5-41ae-80e9-19f4a62962bf-client-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.292402 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.292417 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37e7f049-aff5-41ae-80e9-19f4a62962bf-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.292432 4644 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.292449 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37e7f049-aff5-41ae-80e9-19f4a62962bf-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.292465 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crlz6\" (UniqueName: \"kubernetes.io/projected/37e7f049-aff5-41ae-80e9-19f4a62962bf-kube-api-access-crlz6\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.292480 4644 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c87f5757-8b7f-42a4-8edf-71aeb72748b1-client-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.292497 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c87f5757-8b7f-42a4-8edf-71aeb72748b1-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.292512 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4tlw\" (UniqueName: \"kubernetes.io/projected/c87f5757-8b7f-42a4-8edf-71aeb72748b1-kube-api-access-c4tlw\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.410047 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"] Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.418267 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5949b886dc-grkpw"] Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.423668 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76ddd7869-md9gm"] Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.427592 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76ddd7869-md9gm"] Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.672204 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37e7f049-aff5-41ae-80e9-19f4a62962bf" path="/var/lib/kubelet/pods/37e7f049-aff5-41ae-80e9-19f4a62962bf/volumes" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.674846 4644 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c87f5757-8b7f-42a4-8edf-71aeb72748b1" path="/var/lib/kubelet/pods/c87f5757-8b7f-42a4-8edf-71aeb72748b1/volumes" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.880573 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5"] Feb 04 08:47:20 crc kubenswrapper[4644]: E0204 08:47:20.880832 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e7f049-aff5-41ae-80e9-19f4a62962bf" containerName="route-controller-manager" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.880847 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e7f049-aff5-41ae-80e9-19f4a62962bf" containerName="route-controller-manager" Feb 04 08:47:20 crc kubenswrapper[4644]: E0204 08:47:20.880874 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87f5757-8b7f-42a4-8edf-71aeb72748b1" containerName="controller-manager" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.880883 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87f5757-8b7f-42a4-8edf-71aeb72748b1" containerName="controller-manager" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.881011 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="37e7f049-aff5-41ae-80e9-19f4a62962bf" containerName="route-controller-manager" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.881027 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="c87f5757-8b7f-42a4-8edf-71aeb72748b1" containerName="controller-manager" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.881679 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.885503 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.887318 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.887825 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.888168 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss"] Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.888206 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.888576 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.888675 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.890417 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.895240 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5"] Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.896863 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.896894 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.897239 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.897489 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.897647 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.897776 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.897783 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 04 08:47:20 crc kubenswrapper[4644]: I0204 08:47:20.929246 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss"] Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.002419 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-config\") pod \"route-controller-manager-57fbf49df8-ggnss\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.002490 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp2fw\" (UniqueName: \"kubernetes.io/projected/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-kube-api-access-tp2fw\") pod \"route-controller-manager-57fbf49df8-ggnss\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.002532 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-serving-cert\") pod \"route-controller-manager-57fbf49df8-ggnss\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.002587 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0aa263c-ddd4-437a-9eb4-81e80c30982f-serving-cert\") pod \"controller-manager-79b67fb6b8-xlzr5\" (UID: 
\"d0aa263c-ddd4-437a-9eb4-81e80c30982f\") " pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.002615 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tzx9\" (UniqueName: \"kubernetes.io/projected/d0aa263c-ddd4-437a-9eb4-81e80c30982f-kube-api-access-6tzx9\") pod \"controller-manager-79b67fb6b8-xlzr5\" (UID: \"d0aa263c-ddd4-437a-9eb4-81e80c30982f\") " pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.002657 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0aa263c-ddd4-437a-9eb4-81e80c30982f-config\") pod \"controller-manager-79b67fb6b8-xlzr5\" (UID: \"d0aa263c-ddd4-437a-9eb4-81e80c30982f\") " pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.002688 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-client-ca\") pod \"route-controller-manager-57fbf49df8-ggnss\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.002713 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0aa263c-ddd4-437a-9eb4-81e80c30982f-client-ca\") pod \"controller-manager-79b67fb6b8-xlzr5\" (UID: \"d0aa263c-ddd4-437a-9eb4-81e80c30982f\") " pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.002739 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d0aa263c-ddd4-437a-9eb4-81e80c30982f-proxy-ca-bundles\") pod \"controller-manager-79b67fb6b8-xlzr5\" (UID: \"d0aa263c-ddd4-437a-9eb4-81e80c30982f\") " pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.104140 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp2fw\" (UniqueName: \"kubernetes.io/projected/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-kube-api-access-tp2fw\") pod \"route-controller-manager-57fbf49df8-ggnss\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.104188 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-serving-cert\") pod \"route-controller-manager-57fbf49df8-ggnss\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.104222 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0aa263c-ddd4-437a-9eb4-81e80c30982f-serving-cert\") pod \"controller-manager-79b67fb6b8-xlzr5\" (UID: \"d0aa263c-ddd4-437a-9eb4-81e80c30982f\") " 
pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.104244 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tzx9\" (UniqueName: \"kubernetes.io/projected/d0aa263c-ddd4-437a-9eb4-81e80c30982f-kube-api-access-6tzx9\") pod \"controller-manager-79b67fb6b8-xlzr5\" (UID: \"d0aa263c-ddd4-437a-9eb4-81e80c30982f\") " pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.104285 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0aa263c-ddd4-437a-9eb4-81e80c30982f-config\") pod \"controller-manager-79b67fb6b8-xlzr5\" (UID: \"d0aa263c-ddd4-437a-9eb4-81e80c30982f\") " pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.104311 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-client-ca\") pod \"route-controller-manager-57fbf49df8-ggnss\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.104355 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0aa263c-ddd4-437a-9eb4-81e80c30982f-client-ca\") pod \"controller-manager-79b67fb6b8-xlzr5\" (UID: \"d0aa263c-ddd4-437a-9eb4-81e80c30982f\") " pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.104380 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d0aa263c-ddd4-437a-9eb4-81e80c30982f-proxy-ca-bundles\") pod \"controller-manager-79b67fb6b8-xlzr5\" (UID: \"d0aa263c-ddd4-437a-9eb4-81e80c30982f\") " pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.104423 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-config\") pod \"route-controller-manager-57fbf49df8-ggnss\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.106188 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0aa263c-ddd4-437a-9eb4-81e80c30982f-client-ca\") pod \"controller-manager-79b67fb6b8-xlzr5\" (UID: \"d0aa263c-ddd4-437a-9eb4-81e80c30982f\") " pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.106645 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d0aa263c-ddd4-437a-9eb4-81e80c30982f-proxy-ca-bundles\") pod \"controller-manager-79b67fb6b8-xlzr5\" (UID: \"d0aa263c-ddd4-437a-9eb4-81e80c30982f\") " pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.107178 4644 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0aa263c-ddd4-437a-9eb4-81e80c30982f-config\") pod \"controller-manager-79b67fb6b8-xlzr5\" (UID: \"d0aa263c-ddd4-437a-9eb4-81e80c30982f\") " pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.107272 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-config\") pod \"route-controller-manager-57fbf49df8-ggnss\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.108019 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-client-ca\") pod \"route-controller-manager-57fbf49df8-ggnss\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.110865 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-serving-cert\") pod \"route-controller-manager-57fbf49df8-ggnss\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.112193 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0aa263c-ddd4-437a-9eb4-81e80c30982f-serving-cert\") pod \"controller-manager-79b67fb6b8-xlzr5\" (UID: \"d0aa263c-ddd4-437a-9eb4-81e80c30982f\") " pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.127923 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp2fw\" (UniqueName: \"kubernetes.io/projected/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-kube-api-access-tp2fw\") pod \"route-controller-manager-57fbf49df8-ggnss\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.129495 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tzx9\" (UniqueName: \"kubernetes.io/projected/d0aa263c-ddd4-437a-9eb4-81e80c30982f-kube-api-access-6tzx9\") pod \"controller-manager-79b67fb6b8-xlzr5\" (UID: \"d0aa263c-ddd4-437a-9eb4-81e80c30982f\") " pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.209269 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.229855 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.424299 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5"] Feb 04 08:47:21 crc kubenswrapper[4644]: I0204 08:47:21.687530 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss"] Feb 04 08:47:21 crc kubenswrapper[4644]: W0204 08:47:21.693856 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5d933b3_b195_43ca_8fe2_f5cbeee892c3.slice/crio-3396515f8270696741042aee19b476e1b98d1ae0e5b1039c982d758bf948140b WatchSource:0}: Error finding container 3396515f8270696741042aee19b476e1b98d1ae0e5b1039c982d758bf948140b: Status 404 returned error can't find the container with id 3396515f8270696741042aee19b476e1b98d1ae0e5b1039c982d758bf948140b Feb 04 08:47:22 crc kubenswrapper[4644]: I0204 08:47:22.089722 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" event={"ID":"d0aa263c-ddd4-437a-9eb4-81e80c30982f","Type":"ContainerStarted","Data":"26fa177c98e7655ef30efa82c5560a281478be41050f10fb2a028948ca6c1178"} Feb 04 08:47:22 crc kubenswrapper[4644]: I0204 08:47:22.090051 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" event={"ID":"d0aa263c-ddd4-437a-9eb4-81e80c30982f","Type":"ContainerStarted","Data":"ad4b6b535b1416cf75ec93d45a5cabf021f0b07a75a2618d7c7910d77109ca0e"} Feb 04 08:47:22 crc kubenswrapper[4644]: I0204 08:47:22.090290 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:22 crc kubenswrapper[4644]: I0204 08:47:22.091929 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" event={"ID":"e5d933b3-b195-43ca-8fe2-f5cbeee892c3","Type":"ContainerStarted","Data":"7fb2d81c699eb092f5adbfecd76e39b08d8c82fe8995e538a9c22fcad1b3aa48"} Feb 04 08:47:22 crc kubenswrapper[4644]: I0204 08:47:22.091961 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" event={"ID":"e5d933b3-b195-43ca-8fe2-f5cbeee892c3","Type":"ContainerStarted","Data":"3396515f8270696741042aee19b476e1b98d1ae0e5b1039c982d758bf948140b"} Feb 04 08:47:22 crc kubenswrapper[4644]: I0204 08:47:22.092290 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:22 crc kubenswrapper[4644]: I0204 08:47:22.095160 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" Feb 04 08:47:22 crc kubenswrapper[4644]: I0204 08:47:22.106178 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79b67fb6b8-xlzr5" podStartSLOduration=3.106156273 podStartE2EDuration="3.106156273s" podCreationTimestamp="2026-02-04 08:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:47:22.104780407 +0000 UTC 
m=+352.144838172" watchObservedRunningTime="2026-02-04 08:47:22.106156273 +0000 UTC m=+352.146214038" Feb 04 08:47:22 crc kubenswrapper[4644]: I0204 08:47:22.121191 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" podStartSLOduration=3.12116751 podStartE2EDuration="3.12116751s" podCreationTimestamp="2026-02-04 08:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:47:22.117963924 +0000 UTC m=+352.158021689" watchObservedRunningTime="2026-02-04 08:47:22.12116751 +0000 UTC m=+352.161225265" Feb 04 08:47:22 crc kubenswrapper[4644]: I0204 08:47:22.364241 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.361638 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mpx4z"] Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.363082 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.376747 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mpx4z"] Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.399412 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5de14f1f-d607-4898-ad90-c77b23e49bc2-registry-tls\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.399458 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5de14f1f-d607-4898-ad90-c77b23e49bc2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.399503 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.399532 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5de14f1f-d607-4898-ad90-c77b23e49bc2-registry-certificates\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.399558 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5de14f1f-d607-4898-ad90-c77b23e49bc2-trusted-ca\") pod \"image-registry-66df7c8f76-mpx4z\" 
(UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.399724 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5de14f1f-d607-4898-ad90-c77b23e49bc2-bound-sa-token\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.399801 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5de14f1f-d607-4898-ad90-c77b23e49bc2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.399826 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf6rp\" (UniqueName: \"kubernetes.io/projected/5de14f1f-d607-4898-ad90-c77b23e49bc2-kube-api-access-zf6rp\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.429274 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.501262 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5de14f1f-d607-4898-ad90-c77b23e49bc2-registry-certificates\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.501308 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5de14f1f-d607-4898-ad90-c77b23e49bc2-trusted-ca\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.501369 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5de14f1f-d607-4898-ad90-c77b23e49bc2-bound-sa-token\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.501397 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5de14f1f-d607-4898-ad90-c77b23e49bc2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 
08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.501412 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf6rp\" (UniqueName: \"kubernetes.io/projected/5de14f1f-d607-4898-ad90-c77b23e49bc2-kube-api-access-zf6rp\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.501445 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5de14f1f-d607-4898-ad90-c77b23e49bc2-registry-tls\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.501460 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5de14f1f-d607-4898-ad90-c77b23e49bc2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.501928 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5de14f1f-d607-4898-ad90-c77b23e49bc2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.502713 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5de14f1f-d607-4898-ad90-c77b23e49bc2-trusted-ca\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.502865 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5de14f1f-d607-4898-ad90-c77b23e49bc2-registry-certificates\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.507866 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5de14f1f-d607-4898-ad90-c77b23e49bc2-registry-tls\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.508423 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5de14f1f-d607-4898-ad90-c77b23e49bc2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.518008 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5de14f1f-d607-4898-ad90-c77b23e49bc2-bound-sa-token\") pod 
\"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.531822 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf6rp\" (UniqueName: \"kubernetes.io/projected/5de14f1f-d607-4898-ad90-c77b23e49bc2-kube-api-access-zf6rp\") pod \"image-registry-66df7c8f76-mpx4z\" (UID: \"5de14f1f-d607-4898-ad90-c77b23e49bc2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:28 crc kubenswrapper[4644]: I0204 08:47:28.680777 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:29 crc kubenswrapper[4644]: I0204 08:47:29.153732 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mpx4z"] Feb 04 08:47:30 crc kubenswrapper[4644]: I0204 08:47:30.133867 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" event={"ID":"5de14f1f-d607-4898-ad90-c77b23e49bc2","Type":"ContainerStarted","Data":"e3442ee95460add9496ad375b04ae52db028ceaffe40cca3abef9e9b05452d8e"} Feb 04 08:47:30 crc kubenswrapper[4644]: I0204 08:47:30.133969 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" event={"ID":"5de14f1f-d607-4898-ad90-c77b23e49bc2","Type":"ContainerStarted","Data":"a83dfc19375f63a566ad8930f4465fea5e6e75f665c3779dd54d5df5b1253095"} Feb 04 08:47:30 crc kubenswrapper[4644]: I0204 08:47:30.134037 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:32 crc kubenswrapper[4644]: I0204 08:47:32.443322 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" podStartSLOduration=4.443303467 podStartE2EDuration="4.443303467s" podCreationTimestamp="2026-02-04 08:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:47:30.16552723 +0000 UTC m=+360.205585045" watchObservedRunningTime="2026-02-04 08:47:32.443303467 +0000 UTC m=+362.483361222" Feb 04 08:47:32 crc kubenswrapper[4644]: I0204 08:47:32.444582 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss"] Feb 04 08:47:32 crc kubenswrapper[4644]: I0204 08:47:32.444803 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" podUID="e5d933b3-b195-43ca-8fe2-f5cbeee892c3" containerName="route-controller-manager" containerID="cri-o://7fb2d81c699eb092f5adbfecd76e39b08d8c82fe8995e538a9c22fcad1b3aa48" gracePeriod=30 Feb 04 08:47:32 crc kubenswrapper[4644]: I0204 08:47:32.897293 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:32 crc kubenswrapper[4644]: I0204 08:47:32.962757 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp2fw\" (UniqueName: \"kubernetes.io/projected/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-kube-api-access-tp2fw\") pod \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " Feb 04 08:47:32 crc kubenswrapper[4644]: I0204 08:47:32.962811 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-config\") pod \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " Feb 04 08:47:32 crc kubenswrapper[4644]: I0204 08:47:32.962910 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-client-ca\") pod \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " Feb 04 08:47:32 crc kubenswrapper[4644]: I0204 08:47:32.962963 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-serving-cert\") pod \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\" (UID: \"e5d933b3-b195-43ca-8fe2-f5cbeee892c3\") " Feb 04 08:47:32 crc kubenswrapper[4644]: I0204 08:47:32.964103 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-client-ca" (OuterVolumeSpecName: "client-ca") pod "e5d933b3-b195-43ca-8fe2-f5cbeee892c3" (UID: "e5d933b3-b195-43ca-8fe2-f5cbeee892c3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:47:32 crc kubenswrapper[4644]: I0204 08:47:32.964621 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-config" (OuterVolumeSpecName: "config") pod "e5d933b3-b195-43ca-8fe2-f5cbeee892c3" (UID: "e5d933b3-b195-43ca-8fe2-f5cbeee892c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:47:32 crc kubenswrapper[4644]: I0204 08:47:32.972488 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e5d933b3-b195-43ca-8fe2-f5cbeee892c3" (UID: "e5d933b3-b195-43ca-8fe2-f5cbeee892c3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:47:32 crc kubenswrapper[4644]: I0204 08:47:32.972573 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-kube-api-access-tp2fw" (OuterVolumeSpecName: "kube-api-access-tp2fw") pod "e5d933b3-b195-43ca-8fe2-f5cbeee892c3" (UID: "e5d933b3-b195-43ca-8fe2-f5cbeee892c3"). InnerVolumeSpecName "kube-api-access-tp2fw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.064473 4644 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.064514 4644 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.064530 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp2fw\" (UniqueName: \"kubernetes.io/projected/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-kube-api-access-tp2fw\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.064541 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d933b3-b195-43ca-8fe2-f5cbeee892c3-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.152966 4644 generic.go:334] "Generic (PLEG): container finished" podID="e5d933b3-b195-43ca-8fe2-f5cbeee892c3" containerID="7fb2d81c699eb092f5adbfecd76e39b08d8c82fe8995e538a9c22fcad1b3aa48" exitCode=0 Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.153221 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" event={"ID":"e5d933b3-b195-43ca-8fe2-f5cbeee892c3","Type":"ContainerDied","Data":"7fb2d81c699eb092f5adbfecd76e39b08d8c82fe8995e538a9c22fcad1b3aa48"} Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.153242 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.153263 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss" event={"ID":"e5d933b3-b195-43ca-8fe2-f5cbeee892c3","Type":"ContainerDied","Data":"3396515f8270696741042aee19b476e1b98d1ae0e5b1039c982d758bf948140b"} Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.153288 4644 scope.go:117] "RemoveContainer" containerID="7fb2d81c699eb092f5adbfecd76e39b08d8c82fe8995e538a9c22fcad1b3aa48" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.170795 4644 scope.go:117] "RemoveContainer" containerID="7fb2d81c699eb092f5adbfecd76e39b08d8c82fe8995e538a9c22fcad1b3aa48" Feb 04 08:47:33 crc kubenswrapper[4644]: E0204 08:47:33.171185 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fb2d81c699eb092f5adbfecd76e39b08d8c82fe8995e538a9c22fcad1b3aa48\": container with ID starting with 7fb2d81c699eb092f5adbfecd76e39b08d8c82fe8995e538a9c22fcad1b3aa48 not found: ID does not exist" containerID="7fb2d81c699eb092f5adbfecd76e39b08d8c82fe8995e538a9c22fcad1b3aa48" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.171213 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb2d81c699eb092f5adbfecd76e39b08d8c82fe8995e538a9c22fcad1b3aa48"} err="failed to get container status \"7fb2d81c699eb092f5adbfecd76e39b08d8c82fe8995e538a9c22fcad1b3aa48\": rpc error: code = NotFound desc = could not find container \"7fb2d81c699eb092f5adbfecd76e39b08d8c82fe8995e538a9c22fcad1b3aa48\": container with ID starting with 7fb2d81c699eb092f5adbfecd76e39b08d8c82fe8995e538a9c22fcad1b3aa48 not found: ID does not exist" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.194685 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss"] Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.203804 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fbf49df8-ggnss"] Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.887676 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n"] Feb 04 08:47:33 crc kubenswrapper[4644]: E0204 08:47:33.889228 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d933b3-b195-43ca-8fe2-f5cbeee892c3" containerName="route-controller-manager" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.889351 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d933b3-b195-43ca-8fe2-f5cbeee892c3" containerName="route-controller-manager" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.889535 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d933b3-b195-43ca-8fe2-f5cbeee892c3" containerName="route-controller-manager" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.890060 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.892461 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.892584 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.893654 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.894971 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.895203 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.895276 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.906897 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n"] Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.972068 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlnrb\" (UniqueName: \"kubernetes.io/projected/2a3cb65f-cfc3-4155-80ed-da5dedbae15b-kube-api-access-dlnrb\") pod \"route-controller-manager-799f5886d6-gnq6n\" (UID: \"2a3cb65f-cfc3-4155-80ed-da5dedbae15b\") " pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.972138 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a3cb65f-cfc3-4155-80ed-da5dedbae15b-config\") pod \"route-controller-manager-799f5886d6-gnq6n\" (UID: \"2a3cb65f-cfc3-4155-80ed-da5dedbae15b\") " pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.972170 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a3cb65f-cfc3-4155-80ed-da5dedbae15b-client-ca\") pod \"route-controller-manager-799f5886d6-gnq6n\" (UID: \"2a3cb65f-cfc3-4155-80ed-da5dedbae15b\") " pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:33 crc kubenswrapper[4644]: I0204 08:47:33.972192 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a3cb65f-cfc3-4155-80ed-da5dedbae15b-serving-cert\") pod \"route-controller-manager-799f5886d6-gnq6n\" (UID: \"2a3cb65f-cfc3-4155-80ed-da5dedbae15b\") " pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:34 crc kubenswrapper[4644]: I0204 08:47:34.073092 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a3cb65f-cfc3-4155-80ed-da5dedbae15b-config\") pod 
\"route-controller-manager-799f5886d6-gnq6n\" (UID: \"2a3cb65f-cfc3-4155-80ed-da5dedbae15b\") " pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:34 crc kubenswrapper[4644]: I0204 08:47:34.073375 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a3cb65f-cfc3-4155-80ed-da5dedbae15b-client-ca\") pod \"route-controller-manager-799f5886d6-gnq6n\" (UID: \"2a3cb65f-cfc3-4155-80ed-da5dedbae15b\") " pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:34 crc kubenswrapper[4644]: I0204 08:47:34.073465 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a3cb65f-cfc3-4155-80ed-da5dedbae15b-serving-cert\") pod \"route-controller-manager-799f5886d6-gnq6n\" (UID: \"2a3cb65f-cfc3-4155-80ed-da5dedbae15b\") " pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:34 crc kubenswrapper[4644]: I0204 08:47:34.073596 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlnrb\" (UniqueName: \"kubernetes.io/projected/2a3cb65f-cfc3-4155-80ed-da5dedbae15b-kube-api-access-dlnrb\") pod \"route-controller-manager-799f5886d6-gnq6n\" (UID: \"2a3cb65f-cfc3-4155-80ed-da5dedbae15b\") " pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:34 crc kubenswrapper[4644]: I0204 08:47:34.074158 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a3cb65f-cfc3-4155-80ed-da5dedbae15b-client-ca\") pod \"route-controller-manager-799f5886d6-gnq6n\" (UID: \"2a3cb65f-cfc3-4155-80ed-da5dedbae15b\") " pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:34 crc kubenswrapper[4644]: I0204 08:47:34.074952 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a3cb65f-cfc3-4155-80ed-da5dedbae15b-config\") pod \"route-controller-manager-799f5886d6-gnq6n\" (UID: \"2a3cb65f-cfc3-4155-80ed-da5dedbae15b\") " pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:34 crc kubenswrapper[4644]: I0204 08:47:34.082247 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a3cb65f-cfc3-4155-80ed-da5dedbae15b-serving-cert\") pod \"route-controller-manager-799f5886d6-gnq6n\" (UID: \"2a3cb65f-cfc3-4155-80ed-da5dedbae15b\") " pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:34 crc kubenswrapper[4644]: I0204 08:47:34.097122 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlnrb\" (UniqueName: \"kubernetes.io/projected/2a3cb65f-cfc3-4155-80ed-da5dedbae15b-kube-api-access-dlnrb\") pod \"route-controller-manager-799f5886d6-gnq6n\" (UID: \"2a3cb65f-cfc3-4155-80ed-da5dedbae15b\") " pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:34 crc kubenswrapper[4644]: I0204 08:47:34.209170 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:34 crc kubenswrapper[4644]: I0204 08:47:34.650586 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n"] Feb 04 08:47:34 crc kubenswrapper[4644]: W0204 08:47:34.654241 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a3cb65f_cfc3_4155_80ed_da5dedbae15b.slice/crio-42990672fdb53e0961e3830e638d856cb3749ec49870ab4b0b0c9889ac95d762 WatchSource:0}: Error finding container 42990672fdb53e0961e3830e638d856cb3749ec49870ab4b0b0c9889ac95d762: Status 404 returned error can't find the container with id 42990672fdb53e0961e3830e638d856cb3749ec49870ab4b0b0c9889ac95d762 Feb 04 08:47:34 crc kubenswrapper[4644]: I0204 08:47:34.669060 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d933b3-b195-43ca-8fe2-f5cbeee892c3" path="/var/lib/kubelet/pods/e5d933b3-b195-43ca-8fe2-f5cbeee892c3/volumes" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.167272 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" event={"ID":"2a3cb65f-cfc3-4155-80ed-da5dedbae15b","Type":"ContainerStarted","Data":"794f5c2f71e3cfa4c605016f921b8e18a585969b9d1a0c6c4712668d2411d4bc"} Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.167672 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" event={"ID":"2a3cb65f-cfc3-4155-80ed-da5dedbae15b","Type":"ContainerStarted","Data":"42990672fdb53e0961e3830e638d856cb3749ec49870ab4b0b0c9889ac95d762"} Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.169202 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.198714 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" podStartSLOduration=3.1986876029999998 podStartE2EDuration="3.198687603s" podCreationTimestamp="2026-02-04 08:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:47:35.193995899 +0000 UTC m=+365.234053694" watchObservedRunningTime="2026-02-04 08:47:35.198687603 +0000 UTC m=+365.238745378" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.325630 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-799f5886d6-gnq6n" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.516997 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nts4j"] Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.544918 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nts4j"] Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.545055 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nts4j" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.551544 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.557202 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.557249 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.693921 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ddce8ec-a47d-4efc-b273-56ec3223320d-catalog-content\") pod \"redhat-marketplace-nts4j\" (UID: \"0ddce8ec-a47d-4efc-b273-56ec3223320d\") " pod="openshift-marketplace/redhat-marketplace-nts4j" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.694001 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfw28\" (UniqueName: \"kubernetes.io/projected/0ddce8ec-a47d-4efc-b273-56ec3223320d-kube-api-access-kfw28\") pod \"redhat-marketplace-nts4j\" (UID: \"0ddce8ec-a47d-4efc-b273-56ec3223320d\") " pod="openshift-marketplace/redhat-marketplace-nts4j" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.694031 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ddce8ec-a47d-4efc-b273-56ec3223320d-utilities\") pod \"redhat-marketplace-nts4j\" (UID: \"0ddce8ec-a47d-4efc-b273-56ec3223320d\") " pod="openshift-marketplace/redhat-marketplace-nts4j" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.714658 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f9k5x"] Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.715611 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9k5x" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.720005 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.735953 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9k5x"] Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.795132 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ddce8ec-a47d-4efc-b273-56ec3223320d-catalog-content\") pod \"redhat-marketplace-nts4j\" (UID: \"0ddce8ec-a47d-4efc-b273-56ec3223320d\") " pod="openshift-marketplace/redhat-marketplace-nts4j" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.795528 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b38357a-6348-4d8c-b09d-a06cbdd14739-utilities\") pod \"redhat-operators-f9k5x\" (UID: \"5b38357a-6348-4d8c-b09d-a06cbdd14739\") " pod="openshift-marketplace/redhat-operators-f9k5x" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.795642 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ddce8ec-a47d-4efc-b273-56ec3223320d-catalog-content\") pod \"redhat-marketplace-nts4j\" (UID: \"0ddce8ec-a47d-4efc-b273-56ec3223320d\") " pod="openshift-marketplace/redhat-marketplace-nts4j" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.795707 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df7bm\" (UniqueName: \"kubernetes.io/projected/5b38357a-6348-4d8c-b09d-a06cbdd14739-kube-api-access-df7bm\") pod \"redhat-operators-f9k5x\" (UID: \"5b38357a-6348-4d8c-b09d-a06cbdd14739\") " pod="openshift-marketplace/redhat-operators-f9k5x" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.795753 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfw28\" (UniqueName: \"kubernetes.io/projected/0ddce8ec-a47d-4efc-b273-56ec3223320d-kube-api-access-kfw28\") pod \"redhat-marketplace-nts4j\" (UID: \"0ddce8ec-a47d-4efc-b273-56ec3223320d\") " pod="openshift-marketplace/redhat-marketplace-nts4j" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.795772 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ddce8ec-a47d-4efc-b273-56ec3223320d-utilities\") pod \"redhat-marketplace-nts4j\" (UID: \"0ddce8ec-a47d-4efc-b273-56ec3223320d\") " pod="openshift-marketplace/redhat-marketplace-nts4j" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.795965 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b38357a-6348-4d8c-b09d-a06cbdd14739-catalog-content\") pod \"redhat-operators-f9k5x\" (UID: \"5b38357a-6348-4d8c-b09d-a06cbdd14739\") " pod="openshift-marketplace/redhat-operators-f9k5x" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.796119 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ddce8ec-a47d-4efc-b273-56ec3223320d-utilities\") pod \"redhat-marketplace-nts4j\" (UID: 
\"0ddce8ec-a47d-4efc-b273-56ec3223320d\") " pod="openshift-marketplace/redhat-marketplace-nts4j" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.814146 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfw28\" (UniqueName: \"kubernetes.io/projected/0ddce8ec-a47d-4efc-b273-56ec3223320d-kube-api-access-kfw28\") pod \"redhat-marketplace-nts4j\" (UID: \"0ddce8ec-a47d-4efc-b273-56ec3223320d\") " pod="openshift-marketplace/redhat-marketplace-nts4j" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.871577 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nts4j" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.897984 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b38357a-6348-4d8c-b09d-a06cbdd14739-utilities\") pod \"redhat-operators-f9k5x\" (UID: \"5b38357a-6348-4d8c-b09d-a06cbdd14739\") " pod="openshift-marketplace/redhat-operators-f9k5x" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.898048 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df7bm\" (UniqueName: \"kubernetes.io/projected/5b38357a-6348-4d8c-b09d-a06cbdd14739-kube-api-access-df7bm\") pod \"redhat-operators-f9k5x\" (UID: \"5b38357a-6348-4d8c-b09d-a06cbdd14739\") " pod="openshift-marketplace/redhat-operators-f9k5x" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.898111 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b38357a-6348-4d8c-b09d-a06cbdd14739-catalog-content\") pod \"redhat-operators-f9k5x\" (UID: \"5b38357a-6348-4d8c-b09d-a06cbdd14739\") " pod="openshift-marketplace/redhat-operators-f9k5x" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.898984 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b38357a-6348-4d8c-b09d-a06cbdd14739-catalog-content\") pod \"redhat-operators-f9k5x\" (UID: \"5b38357a-6348-4d8c-b09d-a06cbdd14739\") " pod="openshift-marketplace/redhat-operators-f9k5x" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.899127 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b38357a-6348-4d8c-b09d-a06cbdd14739-utilities\") pod \"redhat-operators-f9k5x\" (UID: \"5b38357a-6348-4d8c-b09d-a06cbdd14739\") " pod="openshift-marketplace/redhat-operators-f9k5x" Feb 04 08:47:35 crc kubenswrapper[4644]: I0204 08:47:35.920880 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df7bm\" (UniqueName: \"kubernetes.io/projected/5b38357a-6348-4d8c-b09d-a06cbdd14739-kube-api-access-df7bm\") pod \"redhat-operators-f9k5x\" (UID: \"5b38357a-6348-4d8c-b09d-a06cbdd14739\") " pod="openshift-marketplace/redhat-operators-f9k5x" Feb 04 08:47:36 crc kubenswrapper[4644]: I0204 08:47:36.037886 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9k5x" Feb 04 08:47:36 crc kubenswrapper[4644]: I0204 08:47:36.272544 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nts4j"] Feb 04 08:47:36 crc kubenswrapper[4644]: W0204 08:47:36.273987 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ddce8ec_a47d_4efc_b273_56ec3223320d.slice/crio-398d4bcb8eee145bdc6a9bd1de366c1e8b0b9bd766b1f97c82177bd17d35bc5f WatchSource:0}: Error finding container 398d4bcb8eee145bdc6a9bd1de366c1e8b0b9bd766b1f97c82177bd17d35bc5f: Status 404 returned error can't find the container with id 398d4bcb8eee145bdc6a9bd1de366c1e8b0b9bd766b1f97c82177bd17d35bc5f Feb 04 08:47:36 crc kubenswrapper[4644]: I0204 08:47:36.443439 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9k5x"] Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.181409 4644 generic.go:334] "Generic (PLEG): container finished" podID="0ddce8ec-a47d-4efc-b273-56ec3223320d" containerID="37bfc92237edeb2f20da23608f6262cc6dcf1b19a002d89b6a1840234cf200fe" exitCode=0 Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.181468 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nts4j" event={"ID":"0ddce8ec-a47d-4efc-b273-56ec3223320d","Type":"ContainerDied","Data":"37bfc92237edeb2f20da23608f6262cc6dcf1b19a002d89b6a1840234cf200fe"} Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.181511 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nts4j" event={"ID":"0ddce8ec-a47d-4efc-b273-56ec3223320d","Type":"ContainerStarted","Data":"398d4bcb8eee145bdc6a9bd1de366c1e8b0b9bd766b1f97c82177bd17d35bc5f"} Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.183816 4644 generic.go:334] "Generic (PLEG): container finished" podID="5b38357a-6348-4d8c-b09d-a06cbdd14739" containerID="de1e8397244dea73079d2e4979b95586514f9710683b40bbd934fbea76c74e26" exitCode=0 Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.183908 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9k5x" event={"ID":"5b38357a-6348-4d8c-b09d-a06cbdd14739","Type":"ContainerDied","Data":"de1e8397244dea73079d2e4979b95586514f9710683b40bbd934fbea76c74e26"} Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.183978 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9k5x" event={"ID":"5b38357a-6348-4d8c-b09d-a06cbdd14739","Type":"ContainerStarted","Data":"1d6380153007edd9b6e5cd54a11e895c2027c2e09e94a223964ed09777246a4a"} Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.518450 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hpbsg"] Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.519785 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.523106 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.525694 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hpbsg"] Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.624220 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-catalog-content\") pod \"certified-operators-hpbsg\" (UID: \"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb\") " pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.624269 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cglq\" (UniqueName: \"kubernetes.io/projected/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-kube-api-access-9cglq\") pod \"certified-operators-hpbsg\" (UID: \"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb\") " pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.624305 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-utilities\") pod \"certified-operators-hpbsg\" (UID: \"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb\") " pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.725800 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cglq\" (UniqueName: \"kubernetes.io/projected/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-kube-api-access-9cglq\") pod \"certified-operators-hpbsg\" (UID: \"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb\") " pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.725865 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-catalog-content\") pod \"certified-operators-hpbsg\" (UID: \"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb\") " pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.725913 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-utilities\") pod \"certified-operators-hpbsg\" (UID: \"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb\") " pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.726571 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-utilities\") pod \"certified-operators-hpbsg\" (UID: \"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb\") " pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.726961 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-catalog-content\") pod \"certified-operators-hpbsg\" (UID: 
\"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb\") " pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.748924 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cglq\" (UniqueName: \"kubernetes.io/projected/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-kube-api-access-9cglq\") pod \"certified-operators-hpbsg\" (UID: \"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb\") " pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 08:47:37 crc kubenswrapper[4644]: I0204 08:47:37.837836 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.190597 4644 generic.go:334] "Generic (PLEG): container finished" podID="0ddce8ec-a47d-4efc-b273-56ec3223320d" containerID="0e9bfa7276b3a06c7da60c42dcf00ae68cbba4803cf0d1fe64771054fd3ec6d8" exitCode=0 Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.190922 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nts4j" event={"ID":"0ddce8ec-a47d-4efc-b273-56ec3223320d","Type":"ContainerDied","Data":"0e9bfa7276b3a06c7da60c42dcf00ae68cbba4803cf0d1fe64771054fd3ec6d8"} Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.197792 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9k5x" event={"ID":"5b38357a-6348-4d8c-b09d-a06cbdd14739","Type":"ContainerStarted","Data":"caa62b6d9fa3a9d07e55dafecd3007ba028c8b5834bcf9933403e435a49c3bdc"} Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.267145 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hpbsg"] Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.510474 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9q6zx"] Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.511426 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9q6zx" Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.515218 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.530379 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9q6zx"] Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.637524 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9fhg\" (UniqueName: \"kubernetes.io/projected/acca16e8-193f-4e4c-acee-8f1067e6260f-kube-api-access-k9fhg\") pod \"community-operators-9q6zx\" (UID: \"acca16e8-193f-4e4c-acee-8f1067e6260f\") " pod="openshift-marketplace/community-operators-9q6zx" Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.637607 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acca16e8-193f-4e4c-acee-8f1067e6260f-catalog-content\") pod \"community-operators-9q6zx\" (UID: \"acca16e8-193f-4e4c-acee-8f1067e6260f\") " pod="openshift-marketplace/community-operators-9q6zx" Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.637672 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acca16e8-193f-4e4c-acee-8f1067e6260f-utilities\") pod \"community-operators-9q6zx\" (UID: \"acca16e8-193f-4e4c-acee-8f1067e6260f\") " pod="openshift-marketplace/community-operators-9q6zx" Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.739431 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9fhg\" (UniqueName: \"kubernetes.io/projected/acca16e8-193f-4e4c-acee-8f1067e6260f-kube-api-access-k9fhg\") pod \"community-operators-9q6zx\" (UID: \"acca16e8-193f-4e4c-acee-8f1067e6260f\") " pod="openshift-marketplace/community-operators-9q6zx" Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.739488 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acca16e8-193f-4e4c-acee-8f1067e6260f-catalog-content\") pod \"community-operators-9q6zx\" (UID: \"acca16e8-193f-4e4c-acee-8f1067e6260f\") " pod="openshift-marketplace/community-operators-9q6zx" Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.739519 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acca16e8-193f-4e4c-acee-8f1067e6260f-utilities\") pod \"community-operators-9q6zx\" (UID: \"acca16e8-193f-4e4c-acee-8f1067e6260f\") " pod="openshift-marketplace/community-operators-9q6zx" Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.739929 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acca16e8-193f-4e4c-acee-8f1067e6260f-utilities\") pod \"community-operators-9q6zx\" (UID: \"acca16e8-193f-4e4c-acee-8f1067e6260f\") " pod="openshift-marketplace/community-operators-9q6zx" Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.740222 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acca16e8-193f-4e4c-acee-8f1067e6260f-catalog-content\") pod \"community-operators-9q6zx\" (UID: 
\"acca16e8-193f-4e4c-acee-8f1067e6260f\") " pod="openshift-marketplace/community-operators-9q6zx" Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.758075 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9fhg\" (UniqueName: \"kubernetes.io/projected/acca16e8-193f-4e4c-acee-8f1067e6260f-kube-api-access-k9fhg\") pod \"community-operators-9q6zx\" (UID: \"acca16e8-193f-4e4c-acee-8f1067e6260f\") " pod="openshift-marketplace/community-operators-9q6zx" Feb 04 08:47:38 crc kubenswrapper[4644]: I0204 08:47:38.835858 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9q6zx" Feb 04 08:47:39 crc kubenswrapper[4644]: I0204 08:47:39.054305 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9q6zx"] Feb 04 08:47:39 crc kubenswrapper[4644]: I0204 08:47:39.205211 4644 generic.go:334] "Generic (PLEG): container finished" podID="5b38357a-6348-4d8c-b09d-a06cbdd14739" containerID="caa62b6d9fa3a9d07e55dafecd3007ba028c8b5834bcf9933403e435a49c3bdc" exitCode=0 Feb 04 08:47:39 crc kubenswrapper[4644]: I0204 08:47:39.205298 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9k5x" event={"ID":"5b38357a-6348-4d8c-b09d-a06cbdd14739","Type":"ContainerDied","Data":"caa62b6d9fa3a9d07e55dafecd3007ba028c8b5834bcf9933403e435a49c3bdc"} Feb 04 08:47:39 crc kubenswrapper[4644]: I0204 08:47:39.208062 4644 generic.go:334] "Generic (PLEG): container finished" podID="2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb" containerID="2ef9e76ef0b511b7294a3f2fc0586a3084cd1ff73e0b600867e69d90ee1a1df3" exitCode=0 Feb 04 08:47:39 crc kubenswrapper[4644]: I0204 08:47:39.208349 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpbsg" event={"ID":"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb","Type":"ContainerDied","Data":"2ef9e76ef0b511b7294a3f2fc0586a3084cd1ff73e0b600867e69d90ee1a1df3"} Feb 04 08:47:39 crc kubenswrapper[4644]: I0204 08:47:39.208386 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpbsg" event={"ID":"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb","Type":"ContainerStarted","Data":"afa1a0a78933a0300d6a0f2965e38e032f504852b2fd950cc9364a47e2cceb96"} Feb 04 08:47:39 crc kubenswrapper[4644]: I0204 08:47:39.211587 4644 generic.go:334] "Generic (PLEG): container finished" podID="acca16e8-193f-4e4c-acee-8f1067e6260f" containerID="d05d323d2c9ef6a0f8c1c0686726655426467db444da6a068a4196a9c82c9d46" exitCode=0 Feb 04 08:47:39 crc kubenswrapper[4644]: I0204 08:47:39.211628 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9q6zx" event={"ID":"acca16e8-193f-4e4c-acee-8f1067e6260f","Type":"ContainerDied","Data":"d05d323d2c9ef6a0f8c1c0686726655426467db444da6a068a4196a9c82c9d46"} Feb 04 08:47:39 crc kubenswrapper[4644]: I0204 08:47:39.211649 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9q6zx" event={"ID":"acca16e8-193f-4e4c-acee-8f1067e6260f","Type":"ContainerStarted","Data":"7bcec6e78320b1a55fbb469dd6614fc11f461f5736fa4c3472a6e7f3686e87c2"} Feb 04 08:47:39 crc kubenswrapper[4644]: I0204 08:47:39.216850 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nts4j" 
event={"ID":"0ddce8ec-a47d-4efc-b273-56ec3223320d","Type":"ContainerStarted","Data":"fa6fb4231843532de1baa655795bc69eae39c1e8325d2918d23c4abdc0b79e21"} Feb 04 08:47:39 crc kubenswrapper[4644]: I0204 08:47:39.293920 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nts4j" podStartSLOduration=2.9040241780000002 podStartE2EDuration="4.293902822s" podCreationTimestamp="2026-02-04 08:47:35 +0000 UTC" firstStartedPulling="2026-02-04 08:47:37.183563776 +0000 UTC m=+367.223621531" lastFinishedPulling="2026-02-04 08:47:38.57344242 +0000 UTC m=+368.613500175" observedRunningTime="2026-02-04 08:47:39.290314067 +0000 UTC m=+369.330371842" watchObservedRunningTime="2026-02-04 08:47:39.293902822 +0000 UTC m=+369.333960577" Feb 04 08:47:40 crc kubenswrapper[4644]: I0204 08:47:40.253814 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9k5x" event={"ID":"5b38357a-6348-4d8c-b09d-a06cbdd14739","Type":"ContainerStarted","Data":"ed7cfa994504f079c3da8057bea53b17dd35315ee05c71f137ca2890f0e01808"} Feb 04 08:47:40 crc kubenswrapper[4644]: I0204 08:47:40.263181 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpbsg" event={"ID":"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb","Type":"ContainerStarted","Data":"08493a6f0f9deffa0c1ac9d3a78153cfa270f19b6d60099f8f4e04bf961166f6"} Feb 04 08:47:40 crc kubenswrapper[4644]: I0204 08:47:40.278933 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f9k5x" podStartSLOduration=2.843099709 podStartE2EDuration="5.278918307s" podCreationTimestamp="2026-02-04 08:47:35 +0000 UTC" firstStartedPulling="2026-02-04 08:47:37.185184579 +0000 UTC m=+367.225242334" lastFinishedPulling="2026-02-04 08:47:39.621003177 +0000 UTC m=+369.661060932" observedRunningTime="2026-02-04 08:47:40.276715009 +0000 UTC m=+370.316772764" watchObservedRunningTime="2026-02-04 08:47:40.278918307 +0000 UTC m=+370.318976062" Feb 04 08:47:41 crc kubenswrapper[4644]: I0204 08:47:41.269468 4644 generic.go:334] "Generic (PLEG): container finished" podID="2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb" containerID="08493a6f0f9deffa0c1ac9d3a78153cfa270f19b6d60099f8f4e04bf961166f6" exitCode=0 Feb 04 08:47:41 crc kubenswrapper[4644]: I0204 08:47:41.269569 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpbsg" event={"ID":"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb","Type":"ContainerDied","Data":"08493a6f0f9deffa0c1ac9d3a78153cfa270f19b6d60099f8f4e04bf961166f6"} Feb 04 08:47:41 crc kubenswrapper[4644]: I0204 08:47:41.269859 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpbsg" event={"ID":"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb","Type":"ContainerStarted","Data":"af43624fe1bd98f49bd998e55ba46c2bdf32913173006ea3e02858bbd1002520"} Feb 04 08:47:41 crc kubenswrapper[4644]: I0204 08:47:41.272133 4644 generic.go:334] "Generic (PLEG): container finished" podID="acca16e8-193f-4e4c-acee-8f1067e6260f" containerID="c8c58ded4e388925b6d743980cea0d913c5f05f58eeff5c08d5b8bb1bb753f1f" exitCode=0 Feb 04 08:47:41 crc kubenswrapper[4644]: I0204 08:47:41.272174 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9q6zx" event={"ID":"acca16e8-193f-4e4c-acee-8f1067e6260f","Type":"ContainerDied","Data":"c8c58ded4e388925b6d743980cea0d913c5f05f58eeff5c08d5b8bb1bb753f1f"} Feb 04 08:47:41 
crc kubenswrapper[4644]: I0204 08:47:41.297202 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hpbsg" podStartSLOduration=2.826477773 podStartE2EDuration="4.297188392s" podCreationTimestamp="2026-02-04 08:47:37 +0000 UTC" firstStartedPulling="2026-02-04 08:47:39.209121583 +0000 UTC m=+369.249179338" lastFinishedPulling="2026-02-04 08:47:40.679832202 +0000 UTC m=+370.719889957" observedRunningTime="2026-02-04 08:47:41.288934754 +0000 UTC m=+371.328992509" watchObservedRunningTime="2026-02-04 08:47:41.297188392 +0000 UTC m=+371.337246147" Feb 04 08:47:42 crc kubenswrapper[4644]: I0204 08:47:42.279097 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9q6zx" event={"ID":"acca16e8-193f-4e4c-acee-8f1067e6260f","Type":"ContainerStarted","Data":"b2879e09c7e998358f4adf060fead6a6acadaf5f90223cb05731d4b1acd9da0d"} Feb 04 08:47:42 crc kubenswrapper[4644]: I0204 08:47:42.301973 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9q6zx" podStartSLOduration=1.865184715 podStartE2EDuration="4.301953358s" podCreationTimestamp="2026-02-04 08:47:38 +0000 UTC" firstStartedPulling="2026-02-04 08:47:39.212713639 +0000 UTC m=+369.252771394" lastFinishedPulling="2026-02-04 08:47:41.649482282 +0000 UTC m=+371.689540037" observedRunningTime="2026-02-04 08:47:42.300957972 +0000 UTC m=+372.341015727" watchObservedRunningTime="2026-02-04 08:47:42.301953358 +0000 UTC m=+372.342011113" Feb 04 08:47:45 crc kubenswrapper[4644]: I0204 08:47:45.871970 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nts4j" Feb 04 08:47:45 crc kubenswrapper[4644]: I0204 08:47:45.872471 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nts4j" Feb 04 08:47:45 crc kubenswrapper[4644]: I0204 08:47:45.934387 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nts4j" Feb 04 08:47:46 crc kubenswrapper[4644]: I0204 08:47:46.038773 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f9k5x" Feb 04 08:47:46 crc kubenswrapper[4644]: I0204 08:47:46.038840 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f9k5x" Feb 04 08:47:46 crc kubenswrapper[4644]: I0204 08:47:46.083907 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f9k5x" Feb 04 08:47:46 crc kubenswrapper[4644]: I0204 08:47:46.352296 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nts4j" Feb 04 08:47:46 crc kubenswrapper[4644]: I0204 08:47:46.353638 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f9k5x" Feb 04 08:47:47 crc kubenswrapper[4644]: I0204 08:47:47.838857 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 08:47:47 crc kubenswrapper[4644]: I0204 08:47:47.838923 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 08:47:47 crc kubenswrapper[4644]: I0204 08:47:47.873039 4644 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 08:47:48 crc kubenswrapper[4644]: I0204 08:47:48.364739 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 08:47:48 crc kubenswrapper[4644]: I0204 08:47:48.688999 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-mpx4z" Feb 04 08:47:48 crc kubenswrapper[4644]: I0204 08:47:48.755379 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9zrhj"] Feb 04 08:47:48 crc kubenswrapper[4644]: I0204 08:47:48.836723 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9q6zx" Feb 04 08:47:48 crc kubenswrapper[4644]: I0204 08:47:48.836769 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9q6zx" Feb 04 08:47:48 crc kubenswrapper[4644]: I0204 08:47:48.886769 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9q6zx" Feb 04 08:47:49 crc kubenswrapper[4644]: I0204 08:47:49.371527 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9q6zx" Feb 04 08:48:05 crc kubenswrapper[4644]: I0204 08:48:05.554886 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 08:48:05 crc kubenswrapper[4644]: I0204 08:48:05.555286 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 08:48:13 crc kubenswrapper[4644]: I0204 08:48:13.793932 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" podUID="c06497d4-3e16-42df-9c4a-657c3db32510" containerName="registry" containerID="cri-o://8fa15f46d277aa0b99ec0fde5157be4be48664a75c34fd25af7bbb073d6209d0" gracePeriod=30 Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.183243 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.315965 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c06497d4-3e16-42df-9c4a-657c3db32510-registry-certificates\") pod \"c06497d4-3e16-42df-9c4a-657c3db32510\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.316068 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c06497d4-3e16-42df-9c4a-657c3db32510-installation-pull-secrets\") pod \"c06497d4-3e16-42df-9c4a-657c3db32510\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.316128 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkjm5\" (UniqueName: \"kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-kube-api-access-qkjm5\") pod \"c06497d4-3e16-42df-9c4a-657c3db32510\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.316160 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c06497d4-3e16-42df-9c4a-657c3db32510-trusted-ca\") pod \"c06497d4-3e16-42df-9c4a-657c3db32510\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.316345 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c06497d4-3e16-42df-9c4a-657c3db32510\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.316386 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-registry-tls\") pod \"c06497d4-3e16-42df-9c4a-657c3db32510\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.316419 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c06497d4-3e16-42df-9c4a-657c3db32510-ca-trust-extracted\") pod \"c06497d4-3e16-42df-9c4a-657c3db32510\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.316443 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-bound-sa-token\") pod \"c06497d4-3e16-42df-9c4a-657c3db32510\" (UID: \"c06497d4-3e16-42df-9c4a-657c3db32510\") " Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.317076 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06497d4-3e16-42df-9c4a-657c3db32510-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c06497d4-3e16-42df-9c4a-657c3db32510" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.317200 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06497d4-3e16-42df-9c4a-657c3db32510-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c06497d4-3e16-42df-9c4a-657c3db32510" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.324305 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-kube-api-access-qkjm5" (OuterVolumeSpecName: "kube-api-access-qkjm5") pod "c06497d4-3e16-42df-9c4a-657c3db32510" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510"). InnerVolumeSpecName "kube-api-access-qkjm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.324514 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c06497d4-3e16-42df-9c4a-657c3db32510" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.326098 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c06497d4-3e16-42df-9c4a-657c3db32510-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c06497d4-3e16-42df-9c4a-657c3db32510" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.330659 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c06497d4-3e16-42df-9c4a-657c3db32510" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.331889 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c06497d4-3e16-42df-9c4a-657c3db32510" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.338062 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c06497d4-3e16-42df-9c4a-657c3db32510-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c06497d4-3e16-42df-9c4a-657c3db32510" (UID: "c06497d4-3e16-42df-9c4a-657c3db32510"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.418254 4644 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c06497d4-3e16-42df-9c4a-657c3db32510-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.418294 4644 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.418304 4644 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c06497d4-3e16-42df-9c4a-657c3db32510-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.418315 4644 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.418323 4644 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c06497d4-3e16-42df-9c4a-657c3db32510-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.418351 4644 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c06497d4-3e16-42df-9c4a-657c3db32510-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.418359 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkjm5\" (UniqueName: \"kubernetes.io/projected/c06497d4-3e16-42df-9c4a-657c3db32510-kube-api-access-qkjm5\") on node \"crc\" DevicePath \"\"" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.456110 4644 generic.go:334] "Generic (PLEG): container finished" podID="c06497d4-3e16-42df-9c4a-657c3db32510" containerID="8fa15f46d277aa0b99ec0fde5157be4be48664a75c34fd25af7bbb073d6209d0" exitCode=0 Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.456145 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.456158 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" event={"ID":"c06497d4-3e16-42df-9c4a-657c3db32510","Type":"ContainerDied","Data":"8fa15f46d277aa0b99ec0fde5157be4be48664a75c34fd25af7bbb073d6209d0"} Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.456186 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9zrhj" event={"ID":"c06497d4-3e16-42df-9c4a-657c3db32510","Type":"ContainerDied","Data":"81c06df3fc110762080c3fe240d9c0946daad0a9ee59eb74406e93f343c4f227"} Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.456239 4644 scope.go:117] "RemoveContainer" containerID="8fa15f46d277aa0b99ec0fde5157be4be48664a75c34fd25af7bbb073d6209d0" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.474038 4644 scope.go:117] "RemoveContainer" containerID="8fa15f46d277aa0b99ec0fde5157be4be48664a75c34fd25af7bbb073d6209d0" Feb 04 08:48:14 crc kubenswrapper[4644]: E0204 08:48:14.474572 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fa15f46d277aa0b99ec0fde5157be4be48664a75c34fd25af7bbb073d6209d0\": container with ID starting with 8fa15f46d277aa0b99ec0fde5157be4be48664a75c34fd25af7bbb073d6209d0 not found: ID does not exist" containerID="8fa15f46d277aa0b99ec0fde5157be4be48664a75c34fd25af7bbb073d6209d0" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.474662 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fa15f46d277aa0b99ec0fde5157be4be48664a75c34fd25af7bbb073d6209d0"} err="failed to get container status \"8fa15f46d277aa0b99ec0fde5157be4be48664a75c34fd25af7bbb073d6209d0\": rpc error: code = NotFound desc = could not find container \"8fa15f46d277aa0b99ec0fde5157be4be48664a75c34fd25af7bbb073d6209d0\": container with ID starting with 8fa15f46d277aa0b99ec0fde5157be4be48664a75c34fd25af7bbb073d6209d0 not found: ID does not exist" Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.488350 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9zrhj"] Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.494170 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9zrhj"] Feb 04 08:48:14 crc kubenswrapper[4644]: I0204 08:48:14.666887 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c06497d4-3e16-42df-9c4a-657c3db32510" path="/var/lib/kubelet/pods/c06497d4-3e16-42df-9c4a-657c3db32510/volumes" Feb 04 08:48:35 crc kubenswrapper[4644]: I0204 08:48:35.554777 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 08:48:35 crc kubenswrapper[4644]: I0204 08:48:35.555373 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 08:48:35 crc 
kubenswrapper[4644]: I0204 08:48:35.555422 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:48:35 crc kubenswrapper[4644]: I0204 08:48:35.556014 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"340cce72b31584dc37ceeab20e931c4f33579b5072191264e84790d9a3fed77a"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 08:48:35 crc kubenswrapper[4644]: I0204 08:48:35.556075 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://340cce72b31584dc37ceeab20e931c4f33579b5072191264e84790d9a3fed77a" gracePeriod=600 Feb 04 08:48:36 crc kubenswrapper[4644]: I0204 08:48:36.596808 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="340cce72b31584dc37ceeab20e931c4f33579b5072191264e84790d9a3fed77a" exitCode=0 Feb 04 08:48:36 crc kubenswrapper[4644]: I0204 08:48:36.596910 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"340cce72b31584dc37ceeab20e931c4f33579b5072191264e84790d9a3fed77a"} Feb 04 08:48:36 crc kubenswrapper[4644]: I0204 08:48:36.597225 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"c5a3e4c401265e263cbe126a63f7ddbc0c32c42ae952e4d4306ad097f58ca211"} Feb 04 08:48:36 crc kubenswrapper[4644]: I0204 08:48:36.597262 4644 scope.go:117] "RemoveContainer" containerID="a07ec2b807902ba428dfc45dc103d52e928a6eea20216dfbd5304fafcf020f2e" Feb 04 08:50:30 crc kubenswrapper[4644]: I0204 08:50:30.831172 4644 scope.go:117] "RemoveContainer" containerID="856bc80b2b0c2ae90c222280215d3db3134b6a0b69b08a1a93058b1a808698d6" Feb 04 08:50:35 crc kubenswrapper[4644]: I0204 08:50:35.554712 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 08:50:35 crc kubenswrapper[4644]: I0204 08:50:35.555026 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 08:51:05 crc kubenswrapper[4644]: I0204 08:51:05.555752 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 08:51:05 crc kubenswrapper[4644]: I0204 08:51:05.556407 4644 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 08:51:30 crc kubenswrapper[4644]: I0204 08:51:30.860285 4644 scope.go:117] "RemoveContainer" containerID="e5a0811975c883b6e650f10fc037ea49beb9a4c9ad4f0d2ced74383cf0e59c06" Feb 04 08:51:30 crc kubenswrapper[4644]: I0204 08:51:30.885431 4644 scope.go:117] "RemoveContainer" containerID="4143c3ba762cb2f6dc15f2881775afc527627aa1268fdf18342fced9094b36d8" Feb 04 08:51:35 crc kubenswrapper[4644]: I0204 08:51:35.555355 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 08:51:35 crc kubenswrapper[4644]: I0204 08:51:35.555658 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 08:51:35 crc kubenswrapper[4644]: I0204 08:51:35.555707 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:51:35 crc kubenswrapper[4644]: I0204 08:51:35.556262 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5a3e4c401265e263cbe126a63f7ddbc0c32c42ae952e4d4306ad097f58ca211"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 08:51:35 crc kubenswrapper[4644]: I0204 08:51:35.556357 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://c5a3e4c401265e263cbe126a63f7ddbc0c32c42ae952e4d4306ad097f58ca211" gracePeriod=600 Feb 04 08:51:35 crc kubenswrapper[4644]: I0204 08:51:35.687023 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="c5a3e4c401265e263cbe126a63f7ddbc0c32c42ae952e4d4306ad097f58ca211" exitCode=0 Feb 04 08:51:35 crc kubenswrapper[4644]: I0204 08:51:35.687071 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"c5a3e4c401265e263cbe126a63f7ddbc0c32c42ae952e4d4306ad097f58ca211"} Feb 04 08:51:35 crc kubenswrapper[4644]: I0204 08:51:35.687108 4644 scope.go:117] "RemoveContainer" containerID="340cce72b31584dc37ceeab20e931c4f33579b5072191264e84790d9a3fed77a" Feb 04 08:51:36 crc kubenswrapper[4644]: I0204 08:51:36.695166 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"c81dc8963c853292a044170f0ee77ae242e3b6dd8a83fa571fd5d2427fd33119"} Feb 04 08:51:38 crc 
kubenswrapper[4644]: I0204 08:51:38.906261 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-27drg"] Feb 04 08:51:38 crc kubenswrapper[4644]: E0204 08:51:38.907043 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06497d4-3e16-42df-9c4a-657c3db32510" containerName="registry" Feb 04 08:51:38 crc kubenswrapper[4644]: I0204 08:51:38.907055 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06497d4-3e16-42df-9c4a-657c3db32510" containerName="registry" Feb 04 08:51:38 crc kubenswrapper[4644]: I0204 08:51:38.907156 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="c06497d4-3e16-42df-9c4a-657c3db32510" containerName="registry" Feb 04 08:51:38 crc kubenswrapper[4644]: I0204 08:51:38.907547 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-27drg" Feb 04 08:51:38 crc kubenswrapper[4644]: I0204 08:51:38.912775 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-mpbm5"] Feb 04 08:51:38 crc kubenswrapper[4644]: I0204 08:51:38.914844 4644 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-j527l" Feb 04 08:51:38 crc kubenswrapper[4644]: I0204 08:51:38.915075 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 04 08:51:38 crc kubenswrapper[4644]: I0204 08:51:38.915220 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 04 08:51:38 crc kubenswrapper[4644]: I0204 08:51:38.920515 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mpbm5" Feb 04 08:51:38 crc kubenswrapper[4644]: I0204 08:51:38.922125 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-27drg"] Feb 04 08:51:38 crc kubenswrapper[4644]: I0204 08:51:38.925052 4644 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-r6ptl" Feb 04 08:51:38 crc kubenswrapper[4644]: I0204 08:51:38.939727 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mpbm5"] Feb 04 08:51:38 crc kubenswrapper[4644]: I0204 08:51:38.957714 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-nk2qt"] Feb 04 08:51:38 crc kubenswrapper[4644]: I0204 08:51:38.958640 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-nk2qt" Feb 04 08:51:38 crc kubenswrapper[4644]: I0204 08:51:38.962393 4644 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-p8wdh" Feb 04 08:51:38 crc kubenswrapper[4644]: I0204 08:51:38.969035 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-nk2qt"] Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.016863 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb9xj\" (UniqueName: \"kubernetes.io/projected/cac9d42c-34be-410d-aca7-2346943b13c6-kube-api-access-nb9xj\") pod \"cert-manager-cainjector-cf98fcc89-27drg\" (UID: \"cac9d42c-34be-410d-aca7-2346943b13c6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-27drg" Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.118277 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb9xj\" (UniqueName: \"kubernetes.io/projected/cac9d42c-34be-410d-aca7-2346943b13c6-kube-api-access-nb9xj\") pod \"cert-manager-cainjector-cf98fcc89-27drg\" (UID: \"cac9d42c-34be-410d-aca7-2346943b13c6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-27drg" Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.118379 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnthb\" (UniqueName: \"kubernetes.io/projected/ce66b184-f3af-4f9c-b86d-138993d4114b-kube-api-access-bnthb\") pod \"cert-manager-webhook-687f57d79b-nk2qt\" (UID: \"ce66b184-f3af-4f9c-b86d-138993d4114b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-nk2qt" Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.118433 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcn7h\" (UniqueName: \"kubernetes.io/projected/ea2632db-c8cd-42a9-8f74-d989cf9f77a2-kube-api-access-wcn7h\") pod \"cert-manager-858654f9db-mpbm5\" (UID: \"ea2632db-c8cd-42a9-8f74-d989cf9f77a2\") " pod="cert-manager/cert-manager-858654f9db-mpbm5" Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.141685 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb9xj\" (UniqueName: \"kubernetes.io/projected/cac9d42c-34be-410d-aca7-2346943b13c6-kube-api-access-nb9xj\") pod \"cert-manager-cainjector-cf98fcc89-27drg\" (UID: \"cac9d42c-34be-410d-aca7-2346943b13c6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-27drg" Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.219171 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcn7h\" (UniqueName: \"kubernetes.io/projected/ea2632db-c8cd-42a9-8f74-d989cf9f77a2-kube-api-access-wcn7h\") pod \"cert-manager-858654f9db-mpbm5\" (UID: \"ea2632db-c8cd-42a9-8f74-d989cf9f77a2\") " pod="cert-manager/cert-manager-858654f9db-mpbm5" Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.219525 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnthb\" (UniqueName: \"kubernetes.io/projected/ce66b184-f3af-4f9c-b86d-138993d4114b-kube-api-access-bnthb\") pod \"cert-manager-webhook-687f57d79b-nk2qt\" (UID: \"ce66b184-f3af-4f9c-b86d-138993d4114b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-nk2qt" Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.226963 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-27drg" Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.235934 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcn7h\" (UniqueName: \"kubernetes.io/projected/ea2632db-c8cd-42a9-8f74-d989cf9f77a2-kube-api-access-wcn7h\") pod \"cert-manager-858654f9db-mpbm5\" (UID: \"ea2632db-c8cd-42a9-8f74-d989cf9f77a2\") " pod="cert-manager/cert-manager-858654f9db-mpbm5" Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.237121 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnthb\" (UniqueName: \"kubernetes.io/projected/ce66b184-f3af-4f9c-b86d-138993d4114b-kube-api-access-bnthb\") pod \"cert-manager-webhook-687f57d79b-nk2qt\" (UID: \"ce66b184-f3af-4f9c-b86d-138993d4114b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-nk2qt" Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.239870 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mpbm5" Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.279004 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-nk2qt" Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.453577 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-27drg"] Feb 04 08:51:39 crc kubenswrapper[4644]: W0204 08:51:39.458163 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcac9d42c_34be_410d_aca7_2346943b13c6.slice/crio-c8f543b8dc96007cf9a717abc9170abba3447f2c6165e768adcd2f421b019b80 WatchSource:0}: Error finding container c8f543b8dc96007cf9a717abc9170abba3447f2c6165e768adcd2f421b019b80: Status 404 returned error can't find the container with id c8f543b8dc96007cf9a717abc9170abba3447f2c6165e768adcd2f421b019b80 Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.464250 4644 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.605660 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-nk2qt"] Feb 04 08:51:39 crc kubenswrapper[4644]: W0204 08:51:39.614763 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce66b184_f3af_4f9c_b86d_138993d4114b.slice/crio-05992214c09a967b20471a7af88cade57f5c01d37dc8782cfc74b6f86b017bfa WatchSource:0}: Error finding container 05992214c09a967b20471a7af88cade57f5c01d37dc8782cfc74b6f86b017bfa: Status 404 returned error can't find the container with id 05992214c09a967b20471a7af88cade57f5c01d37dc8782cfc74b6f86b017bfa Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.709141 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-27drg" event={"ID":"cac9d42c-34be-410d-aca7-2346943b13c6","Type":"ContainerStarted","Data":"c8f543b8dc96007cf9a717abc9170abba3447f2c6165e768adcd2f421b019b80"} Feb 04 08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.710421 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-nk2qt" event={"ID":"ce66b184-f3af-4f9c-b86d-138993d4114b","Type":"ContainerStarted","Data":"05992214c09a967b20471a7af88cade57f5c01d37dc8782cfc74b6f86b017bfa"} Feb 04 
08:51:39 crc kubenswrapper[4644]: I0204 08:51:39.740069 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mpbm5"] Feb 04 08:51:39 crc kubenswrapper[4644]: W0204 08:51:39.743392 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea2632db_c8cd_42a9_8f74_d989cf9f77a2.slice/crio-1c286c6d81e9ae8a4d29a50eb83604644055cc6c9553eb665d59c41ebdd50a90 WatchSource:0}: Error finding container 1c286c6d81e9ae8a4d29a50eb83604644055cc6c9553eb665d59c41ebdd50a90: Status 404 returned error can't find the container with id 1c286c6d81e9ae8a4d29a50eb83604644055cc6c9553eb665d59c41ebdd50a90 Feb 04 08:51:40 crc kubenswrapper[4644]: I0204 08:51:40.721504 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mpbm5" event={"ID":"ea2632db-c8cd-42a9-8f74-d989cf9f77a2","Type":"ContainerStarted","Data":"1c286c6d81e9ae8a4d29a50eb83604644055cc6c9553eb665d59c41ebdd50a90"} Feb 04 08:51:43 crc kubenswrapper[4644]: I0204 08:51:43.740059 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mpbm5" event={"ID":"ea2632db-c8cd-42a9-8f74-d989cf9f77a2","Type":"ContainerStarted","Data":"2df00ae022ca56e688761f1794c034a193bb3cebdff8e34c092845e7264eca45"} Feb 04 08:51:43 crc kubenswrapper[4644]: I0204 08:51:43.741782 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-nk2qt" event={"ID":"ce66b184-f3af-4f9c-b86d-138993d4114b","Type":"ContainerStarted","Data":"1cda7c321a9e576f73a43aaaf8988738ed0a2cde0e28a6bf4246508d70e3c813"} Feb 04 08:51:43 crc kubenswrapper[4644]: I0204 08:51:43.741915 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-nk2qt" Feb 04 08:51:43 crc kubenswrapper[4644]: I0204 08:51:43.747129 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-27drg" event={"ID":"cac9d42c-34be-410d-aca7-2346943b13c6","Type":"ContainerStarted","Data":"cf9d917e3e5b73b6b39c29a633bd049a584f515ca2795fcec7e16bf371f52499"} Feb 04 08:51:43 crc kubenswrapper[4644]: I0204 08:51:43.765177 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-mpbm5" podStartSLOduration=2.35899336 podStartE2EDuration="5.765162322s" podCreationTimestamp="2026-02-04 08:51:38 +0000 UTC" firstStartedPulling="2026-02-04 08:51:39.74554216 +0000 UTC m=+609.785599915" lastFinishedPulling="2026-02-04 08:51:43.151711122 +0000 UTC m=+613.191768877" observedRunningTime="2026-02-04 08:51:43.76471409 +0000 UTC m=+613.804771855" watchObservedRunningTime="2026-02-04 08:51:43.765162322 +0000 UTC m=+613.805220077" Feb 04 08:51:43 crc kubenswrapper[4644]: I0204 08:51:43.781563 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-27drg" podStartSLOduration=2.097105827 podStartE2EDuration="5.781542791s" podCreationTimestamp="2026-02-04 08:51:38 +0000 UTC" firstStartedPulling="2026-02-04 08:51:39.46406884 +0000 UTC m=+609.504126595" lastFinishedPulling="2026-02-04 08:51:43.148505804 +0000 UTC m=+613.188563559" observedRunningTime="2026-02-04 08:51:43.776946435 +0000 UTC m=+613.817004210" watchObservedRunningTime="2026-02-04 08:51:43.781542791 +0000 UTC m=+613.821600546" Feb 04 08:51:43 crc kubenswrapper[4644]: I0204 08:51:43.797093 4644 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-nk2qt" podStartSLOduration=2.214929839 podStartE2EDuration="5.797071745s" podCreationTimestamp="2026-02-04 08:51:38 +0000 UTC" firstStartedPulling="2026-02-04 08:51:39.618123484 +0000 UTC m=+609.658181239" lastFinishedPulling="2026-02-04 08:51:43.20026539 +0000 UTC m=+613.240323145" observedRunningTime="2026-02-04 08:51:43.794203176 +0000 UTC m=+613.834260951" watchObservedRunningTime="2026-02-04 08:51:43.797071745 +0000 UTC m=+613.837129500" Feb 04 08:51:48 crc kubenswrapper[4644]: I0204 08:51:48.670579 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ksbcg"] Feb 04 08:51:48 crc kubenswrapper[4644]: I0204 08:51:48.671520 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovn-controller" containerID="cri-o://003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57" gracePeriod=30 Feb 04 08:51:48 crc kubenswrapper[4644]: I0204 08:51:48.671600 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="nbdb" containerID="cri-o://ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2" gracePeriod=30 Feb 04 08:51:48 crc kubenswrapper[4644]: I0204 08:51:48.671644 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="kube-rbac-proxy-node" containerID="cri-o://a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9" gracePeriod=30 Feb 04 08:51:48 crc kubenswrapper[4644]: I0204 08:51:48.671702 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovn-acl-logging" containerID="cri-o://45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b" gracePeriod=30 Feb 04 08:51:48 crc kubenswrapper[4644]: I0204 08:51:48.671705 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9" gracePeriod=30 Feb 04 08:51:48 crc kubenswrapper[4644]: I0204 08:51:48.671952 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="northd" containerID="cri-o://c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11" gracePeriod=30 Feb 04 08:51:48 crc kubenswrapper[4644]: I0204 08:51:48.671975 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="sbdb" containerID="cri-o://6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708" gracePeriod=30 Feb 04 08:51:48 crc kubenswrapper[4644]: I0204 08:51:48.768425 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovnkube-controller" 
containerID="cri-o://140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc" gracePeriod=30 Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.058755 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/3.log" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.060960 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovn-acl-logging/0.log" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.061471 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovn-controller/0.log" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.061926 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.112578 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xzgvb"] Feb 04 08:51:49 crc kubenswrapper[4644]: E0204 08:51:49.112825 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovnkube-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.112848 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovnkube-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: E0204 08:51:49.112859 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovnkube-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.112866 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovnkube-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: E0204 08:51:49.112877 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovnkube-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.112884 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovnkube-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: E0204 08:51:49.112894 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovn-acl-logging" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.112902 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovn-acl-logging" Feb 04 08:51:49 crc kubenswrapper[4644]: E0204 08:51:49.112912 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="northd" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.112919 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="northd" Feb 04 08:51:49 crc kubenswrapper[4644]: E0204 08:51:49.112930 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovn-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.112938 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovn-controller" Feb 04 08:51:49 
crc kubenswrapper[4644]: E0204 08:51:49.112949 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovnkube-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.112958 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovnkube-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: E0204 08:51:49.112973 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="kubecfg-setup" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.112982 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="kubecfg-setup" Feb 04 08:51:49 crc kubenswrapper[4644]: E0204 08:51:49.112991 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="sbdb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.112998 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="sbdb" Feb 04 08:51:49 crc kubenswrapper[4644]: E0204 08:51:49.113010 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="kube-rbac-proxy-node" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113017 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="kube-rbac-proxy-node" Feb 04 08:51:49 crc kubenswrapper[4644]: E0204 08:51:49.113025 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="kube-rbac-proxy-ovn-metrics" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113033 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="kube-rbac-proxy-ovn-metrics" Feb 04 08:51:49 crc kubenswrapper[4644]: E0204 08:51:49.113045 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="nbdb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113052 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="nbdb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113166 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="kube-rbac-proxy-ovn-metrics" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113179 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovnkube-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113188 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="sbdb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113197 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovn-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113208 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="northd" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113216 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" 
containerName="ovnkube-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113226 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="nbdb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113237 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="kube-rbac-proxy-node" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113244 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovnkube-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113254 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovn-acl-logging" Feb 04 08:51:49 crc kubenswrapper[4644]: E0204 08:51:49.113378 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovnkube-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113387 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovnkube-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113486 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovnkube-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.113502 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerName="ovnkube-controller" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.115420 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174014 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-cni-bin\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174065 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-kubelet\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174140 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovnkube-script-lib\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174176 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-systemd\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174252 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-ovn\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: 
\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174353 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174430 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-node-log\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174462 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-node-log" (OuterVolumeSpecName: "node-log") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174531 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-slash\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174548 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-slash" (OuterVolumeSpecName: "host-slash") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174582 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-systemd-units\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174600 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174840 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174864 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). 
InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174910 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.174916 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175154 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-env-overrides\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175233 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovn-node-metrics-cert\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175258 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-openvswitch\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175290 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b6cg\" (UniqueName: \"kubernetes.io/projected/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-kube-api-access-2b6cg\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175309 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-log-socket\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175365 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-run-ovn-kubernetes\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175388 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-var-lib-openvswitch\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: 
\"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175419 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-run-netns\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175447 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-cni-netd\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175478 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovnkube-config\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175504 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-etc-openvswitch\") pod \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\" (UID: \"98b7bb4a-12ca-4851-bf5a-49d38465ec0d\") " Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175534 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175568 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175560 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175666 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-run-ovn\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175703 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175739 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-run-netns\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175762 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/521012cf-2628-4040-8024-ea825c89ac9f-ovnkube-script-lib\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175784 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-var-lib-openvswitch\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175804 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-node-log\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175830 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4cmj\" (UniqueName: \"kubernetes.io/projected/521012cf-2628-4040-8024-ea825c89ac9f-kube-api-access-s4cmj\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175876 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-systemd-units\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175927 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/521012cf-2628-4040-8024-ea825c89ac9f-ovn-node-metrics-cert\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175950 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175960 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-kubelet\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175981 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-log-socket" (OuterVolumeSpecName: "log-socket") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.175982 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-run-ovn-kubernetes\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176010 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176023 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-slash\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176041 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176046 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-cni-bin\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176067 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176071 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-run-openvswitch\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176150 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-run-systemd\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176198 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/521012cf-2628-4040-8024-ea825c89ac9f-ovnkube-config\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176221 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/521012cf-2628-4040-8024-ea825c89ac9f-env-overrides\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176243 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176246 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-cni-netd\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176313 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-etc-openvswitch\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176351 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-log-socket\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176407 4644 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176423 4644 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176435 4644 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-log-socket\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176447 4644 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176459 4644 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176470 4644 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176481 4644 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176492 4644 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176503 4644 reconciler_common.go:293] "Volume detached for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176513 4644 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176525 4644 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176537 4644 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176549 4644 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-node-log\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176559 4644 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-slash\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176570 4644 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176581 4644 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.176524 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.180316 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.180564 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-kube-api-access-2b6cg" (OuterVolumeSpecName: "kube-api-access-2b6cg") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "kube-api-access-2b6cg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.191687 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "98b7bb4a-12ca-4851-bf5a-49d38465ec0d" (UID: "98b7bb4a-12ca-4851-bf5a-49d38465ec0d"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277080 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/521012cf-2628-4040-8024-ea825c89ac9f-ovn-node-metrics-cert\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277129 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-kubelet\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277151 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-run-ovn-kubernetes\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277172 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-slash\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277193 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-cni-bin\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277212 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-run-systemd\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277233 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-run-openvswitch\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277262 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/521012cf-2628-4040-8024-ea825c89ac9f-ovnkube-config\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277279 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/521012cf-2628-4040-8024-ea825c89ac9f-env-overrides\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277296 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-cni-netd\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277370 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-etc-openvswitch\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277401 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-log-socket\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277431 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-run-ovn\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277454 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277480 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-run-netns\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277498 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/521012cf-2628-4040-8024-ea825c89ac9f-ovnkube-script-lib\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277515 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-var-lib-openvswitch\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" 
Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277530 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-node-log\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277550 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4cmj\" (UniqueName: \"kubernetes.io/projected/521012cf-2628-4040-8024-ea825c89ac9f-kube-api-access-s4cmj\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277568 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-systemd-units\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277611 4644 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277623 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b6cg\" (UniqueName: \"kubernetes.io/projected/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-kube-api-access-2b6cg\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277633 4644 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277643 4644 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98b7bb4a-12ca-4851-bf5a-49d38465ec0d-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.277678 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-systemd-units\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.278041 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-run-netns\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.278136 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-log-socket\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.278184 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-run-ovn\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.278149 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-kubelet\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.278140 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-var-lib-openvswitch\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.278048 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-cni-netd\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.278246 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-etc-openvswitch\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.278255 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.278280 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-run-systemd\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.278310 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-slash\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.278358 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-cni-bin\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.278388 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-run-openvswitch\") pod \"ovnkube-node-xzgvb\" (UID: 
\"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.278502 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-host-run-ovn-kubernetes\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.278590 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/521012cf-2628-4040-8024-ea825c89ac9f-node-log\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.279062 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/521012cf-2628-4040-8024-ea825c89ac9f-ovnkube-script-lib\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.279077 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/521012cf-2628-4040-8024-ea825c89ac9f-ovnkube-config\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.280195 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/521012cf-2628-4040-8024-ea825c89ac9f-env-overrides\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.281312 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/521012cf-2628-4040-8024-ea825c89ac9f-ovn-node-metrics-cert\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.281854 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-nk2qt" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.296712 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4cmj\" (UniqueName: \"kubernetes.io/projected/521012cf-2628-4040-8024-ea825c89ac9f-kube-api-access-s4cmj\") pod \"ovnkube-node-xzgvb\" (UID: \"521012cf-2628-4040-8024-ea825c89ac9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.428812 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" Feb 04 08:51:49 crc kubenswrapper[4644]: W0204 08:51:49.448346 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod521012cf_2628_4040_8024_ea825c89ac9f.slice/crio-fe7634624af13fb6ce9b36e310abc70d1df16ed34ccbccad1fa0148898826659 WatchSource:0}: Error finding container fe7634624af13fb6ce9b36e310abc70d1df16ed34ccbccad1fa0148898826659: Status 404 returned error can't find the container with id fe7634624af13fb6ce9b36e310abc70d1df16ed34ccbccad1fa0148898826659 Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.790045 4644 generic.go:334] "Generic (PLEG): container finished" podID="521012cf-2628-4040-8024-ea825c89ac9f" containerID="a8ba802ba09366c8665a6e4317f36f7a5a58e1d76d02276605d1b1c35580602d" exitCode=0 Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.790175 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" event={"ID":"521012cf-2628-4040-8024-ea825c89ac9f","Type":"ContainerDied","Data":"a8ba802ba09366c8665a6e4317f36f7a5a58e1d76d02276605d1b1c35580602d"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.790222 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" event={"ID":"521012cf-2628-4040-8024-ea825c89ac9f","Type":"ContainerStarted","Data":"fe7634624af13fb6ce9b36e310abc70d1df16ed34ccbccad1fa0148898826659"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.795104 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mszlj_7aa20f1c-0ad7-449e-a179-e246a52dfb2a/kube-multus/2.log" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.797059 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mszlj_7aa20f1c-0ad7-449e-a179-e246a52dfb2a/kube-multus/1.log" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.797284 4644 generic.go:334] "Generic (PLEG): container finished" podID="7aa20f1c-0ad7-449e-a179-e246a52dfb2a" containerID="a41c3d38db21d832941edef1eb09df8ed99a05a9e997b6cdd401a44230fcd4f4" exitCode=2 Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.797946 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mszlj" event={"ID":"7aa20f1c-0ad7-449e-a179-e246a52dfb2a","Type":"ContainerDied","Data":"a41c3d38db21d832941edef1eb09df8ed99a05a9e997b6cdd401a44230fcd4f4"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.798268 4644 scope.go:117] "RemoveContainer" containerID="842358f4e1f1cf7051120bc0c0df667d8bfd38ea2aa8c58d059b3cec52839077" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.799077 4644 scope.go:117] "RemoveContainer" containerID="a41c3d38db21d832941edef1eb09df8ed99a05a9e997b6cdd401a44230fcd4f4" Feb 04 08:51:49 crc kubenswrapper[4644]: E0204 08:51:49.799724 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mszlj_openshift-multus(7aa20f1c-0ad7-449e-a179-e246a52dfb2a)\"" pod="openshift-multus/multus-mszlj" podUID="7aa20f1c-0ad7-449e-a179-e246a52dfb2a" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.802847 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovnkube-controller/3.log" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.809592 
4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovn-acl-logging/0.log" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.810463 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ksbcg_98b7bb4a-12ca-4851-bf5a-49d38465ec0d/ovn-controller/0.log" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811490 4644 generic.go:334] "Generic (PLEG): container finished" podID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerID="140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc" exitCode=0 Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811530 4644 generic.go:334] "Generic (PLEG): container finished" podID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerID="6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708" exitCode=0 Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811550 4644 generic.go:334] "Generic (PLEG): container finished" podID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerID="ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2" exitCode=0 Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811568 4644 generic.go:334] "Generic (PLEG): container finished" podID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerID="c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11" exitCode=0 Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811584 4644 generic.go:334] "Generic (PLEG): container finished" podID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerID="a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9" exitCode=0 Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811596 4644 generic.go:334] "Generic (PLEG): container finished" podID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerID="a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9" exitCode=0 Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811609 4644 generic.go:334] "Generic (PLEG): container finished" podID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerID="45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b" exitCode=143 Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811623 4644 generic.go:334] "Generic (PLEG): container finished" podID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" containerID="003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57" exitCode=143 Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811652 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerDied","Data":"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811691 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerDied","Data":"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811713 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerDied","Data":"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811732 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerDied","Data":"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811750 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerDied","Data":"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811795 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerDied","Data":"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811817 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811835 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811846 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811857 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811868 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811877 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811888 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811898 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811907 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811917 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811931 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" 
event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerDied","Data":"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811946 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811959 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811969 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.811979 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812024 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812034 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812043 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812053 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812063 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812073 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812087 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerDied","Data":"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812103 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812115 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812125 4644 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812136 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812146 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812155 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812166 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812175 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812185 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812195 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812209 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" event={"ID":"98b7bb4a-12ca-4851-bf5a-49d38465ec0d","Type":"ContainerDied","Data":"e03bd3c341ec9b1320cf9d79dfe87ac3db27020751976c47eb039b1142b7033c"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812225 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812238 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812250 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812260 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812269 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812279 4644 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812289 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812298 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812309 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812319 4644 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee"} Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.812469 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ksbcg" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.882651 4644 scope.go:117] "RemoveContainer" containerID="140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.917133 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ksbcg"] Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.919520 4644 scope.go:117] "RemoveContainer" containerID="6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.921263 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ksbcg"] Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.937398 4644 scope.go:117] "RemoveContainer" containerID="6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.953923 4644 scope.go:117] "RemoveContainer" containerID="ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.971752 4644 scope.go:117] "RemoveContainer" containerID="c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11" Feb 04 08:51:49 crc kubenswrapper[4644]: I0204 08:51:49.997053 4644 scope.go:117] "RemoveContainer" containerID="a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.014188 4644 scope.go:117] "RemoveContainer" containerID="a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.037809 4644 scope.go:117] "RemoveContainer" containerID="45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.053364 4644 scope.go:117] "RemoveContainer" containerID="003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.083429 4644 scope.go:117] "RemoveContainer" containerID="0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.100214 4644 scope.go:117] 
"RemoveContainer" containerID="140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc" Feb 04 08:51:50 crc kubenswrapper[4644]: E0204 08:51:50.100731 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc\": container with ID starting with 140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc not found: ID does not exist" containerID="140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.100767 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc"} err="failed to get container status \"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc\": rpc error: code = NotFound desc = could not find container \"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc\": container with ID starting with 140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.100794 4644 scope.go:117] "RemoveContainer" containerID="6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c" Feb 04 08:51:50 crc kubenswrapper[4644]: E0204 08:51:50.101054 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\": container with ID starting with 6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c not found: ID does not exist" containerID="6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.101079 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c"} err="failed to get container status \"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\": rpc error: code = NotFound desc = could not find container \"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\": container with ID starting with 6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.101094 4644 scope.go:117] "RemoveContainer" containerID="6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708" Feb 04 08:51:50 crc kubenswrapper[4644]: E0204 08:51:50.101591 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\": container with ID starting with 6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708 not found: ID does not exist" containerID="6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.101617 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708"} err="failed to get container status \"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\": rpc error: code = NotFound desc = could not find container \"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\": container with ID starting with 
6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708 not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.101634 4644 scope.go:117] "RemoveContainer" containerID="ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2" Feb 04 08:51:50 crc kubenswrapper[4644]: E0204 08:51:50.103065 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\": container with ID starting with ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2 not found: ID does not exist" containerID="ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.103143 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2"} err="failed to get container status \"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\": rpc error: code = NotFound desc = could not find container \"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\": container with ID starting with ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2 not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.103202 4644 scope.go:117] "RemoveContainer" containerID="c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11" Feb 04 08:51:50 crc kubenswrapper[4644]: E0204 08:51:50.103817 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\": container with ID starting with c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11 not found: ID does not exist" containerID="c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.103844 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11"} err="failed to get container status \"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\": rpc error: code = NotFound desc = could not find container \"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\": container with ID starting with c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11 not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.103862 4644 scope.go:117] "RemoveContainer" containerID="a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9" Feb 04 08:51:50 crc kubenswrapper[4644]: E0204 08:51:50.104141 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\": container with ID starting with a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9 not found: ID does not exist" containerID="a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.104184 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9"} err="failed to get container status \"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\": rpc 
error: code = NotFound desc = could not find container \"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\": container with ID starting with a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9 not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.104198 4644 scope.go:117] "RemoveContainer" containerID="a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9" Feb 04 08:51:50 crc kubenswrapper[4644]: E0204 08:51:50.104478 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\": container with ID starting with a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9 not found: ID does not exist" containerID="a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.104501 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9"} err="failed to get container status \"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\": rpc error: code = NotFound desc = could not find container \"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\": container with ID starting with a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9 not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.104523 4644 scope.go:117] "RemoveContainer" containerID="45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b" Feb 04 08:51:50 crc kubenswrapper[4644]: E0204 08:51:50.104884 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\": container with ID starting with 45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b not found: ID does not exist" containerID="45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.104914 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b"} err="failed to get container status \"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\": rpc error: code = NotFound desc = could not find container \"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\": container with ID starting with 45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.104931 4644 scope.go:117] "RemoveContainer" containerID="003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57" Feb 04 08:51:50 crc kubenswrapper[4644]: E0204 08:51:50.105250 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\": container with ID starting with 003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57 not found: ID does not exist" containerID="003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.105292 4644 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57"} err="failed to get container status \"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\": rpc error: code = NotFound desc = could not find container \"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\": container with ID starting with 003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57 not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.105305 4644 scope.go:117] "RemoveContainer" containerID="0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee" Feb 04 08:51:50 crc kubenswrapper[4644]: E0204 08:51:50.105649 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\": container with ID starting with 0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee not found: ID does not exist" containerID="0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.105673 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee"} err="failed to get container status \"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\": rpc error: code = NotFound desc = could not find container \"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\": container with ID starting with 0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.105688 4644 scope.go:117] "RemoveContainer" containerID="140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.106028 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc"} err="failed to get container status \"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc\": rpc error: code = NotFound desc = could not find container \"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc\": container with ID starting with 140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.106071 4644 scope.go:117] "RemoveContainer" containerID="6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.106464 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c"} err="failed to get container status \"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\": rpc error: code = NotFound desc = could not find container \"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\": container with ID starting with 6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.106509 4644 scope.go:117] "RemoveContainer" containerID="6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.106815 4644 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708"} err="failed to get container status \"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\": rpc error: code = NotFound desc = could not find container \"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\": container with ID starting with 6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708 not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.106854 4644 scope.go:117] "RemoveContainer" containerID="ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.110541 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2"} err="failed to get container status \"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\": rpc error: code = NotFound desc = could not find container \"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\": container with ID starting with ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2 not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.110569 4644 scope.go:117] "RemoveContainer" containerID="c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.111449 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11"} err="failed to get container status \"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\": rpc error: code = NotFound desc = could not find container \"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\": container with ID starting with c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11 not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.111474 4644 scope.go:117] "RemoveContainer" containerID="a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.112704 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9"} err="failed to get container status \"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\": rpc error: code = NotFound desc = could not find container \"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\": container with ID starting with a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9 not found: ID does not exist" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.112776 4644 scope.go:117] "RemoveContainer" containerID="a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9" Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.113457 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9"} err="failed to get container status \"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\": rpc error: code = NotFound desc = could not find container \"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\": container with ID starting with a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9 not found: ID does not exist" Feb 
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.113478 4644 scope.go:117] "RemoveContainer" containerID="45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.113769 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b"} err="failed to get container status \"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\": rpc error: code = NotFound desc = could not find container \"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\": container with ID starting with 45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.113813 4644 scope.go:117] "RemoveContainer" containerID="003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.114078 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57"} err="failed to get container status \"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\": rpc error: code = NotFound desc = could not find container \"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\": container with ID starting with 003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57 not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.114099 4644 scope.go:117] "RemoveContainer" containerID="0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.114297 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee"} err="failed to get container status \"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\": rpc error: code = NotFound desc = could not find container \"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\": container with ID starting with 0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.114321 4644 scope.go:117] "RemoveContainer" containerID="140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.114553 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc"} err="failed to get container status \"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc\": rpc error: code = NotFound desc = could not find container \"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc\": container with ID starting with 140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.114584 4644 scope.go:117] "RemoveContainer" containerID="6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.114789 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c"} err="failed to get container status \"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\": rpc error: code = NotFound desc = could not find container \"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\": container with ID starting with 6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.114805 4644 scope.go:117] "RemoveContainer" containerID="6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.115020 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708"} err="failed to get container status \"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\": rpc error: code = NotFound desc = could not find container \"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\": container with ID starting with 6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708 not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.115049 4644 scope.go:117] "RemoveContainer" containerID="ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.115246 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2"} err="failed to get container status \"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\": rpc error: code = NotFound desc = could not find container \"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\": container with ID starting with ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2 not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.115267 4644 scope.go:117] "RemoveContainer" containerID="c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.115478 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11"} err="failed to get container status \"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\": rpc error: code = NotFound desc = could not find container \"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\": container with ID starting with c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11 not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.115512 4644 scope.go:117] "RemoveContainer" containerID="a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.115738 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9"} err="failed to get container status \"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\": rpc error: code = NotFound desc = could not find container \"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\": container with ID starting with a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9 not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.115761 4644 scope.go:117] "RemoveContainer" containerID="a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.115939 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9"} err="failed to get container status \"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\": rpc error: code = NotFound desc = could not find container \"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\": container with ID starting with a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9 not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.115956 4644 scope.go:117] "RemoveContainer" containerID="45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.116182 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b"} err="failed to get container status \"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\": rpc error: code = NotFound desc = could not find container \"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\": container with ID starting with 45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.116198 4644 scope.go:117] "RemoveContainer" containerID="003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.116386 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57"} err="failed to get container status \"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\": rpc error: code = NotFound desc = could not find container \"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\": container with ID starting with 003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57 not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.116407 4644 scope.go:117] "RemoveContainer" containerID="0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.116674 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee"} err="failed to get container status \"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\": rpc error: code = NotFound desc = could not find container \"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\": container with ID starting with 0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.116693 4644 scope.go:117] "RemoveContainer" containerID="140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.116884 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc"} err="failed to get container status \"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc\": rpc error: code = NotFound desc = could not find container \"140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc\": container with ID starting with 140e26ed8600b19fb760453470df321f9a2a783b4c94fb750199e788d078e0cc not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.116909 4644 scope.go:117] "RemoveContainer" containerID="6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.117197 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c"} err="failed to get container status \"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\": rpc error: code = NotFound desc = could not find container \"6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c\": container with ID starting with 6d9a4b15c72edf8ddd0d0378b2640f0cde41e4c1e483cefcc7145408bdd41b3c not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.117410 4644 scope.go:117] "RemoveContainer" containerID="6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.119687 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708"} err="failed to get container status \"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\": rpc error: code = NotFound desc = could not find container \"6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708\": container with ID starting with 6264716b55282df89e2d15441f5025cb7cd14da0f631b51f3f0754d7b2749708 not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.119719 4644 scope.go:117] "RemoveContainer" containerID="ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.120100 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2"} err="failed to get container status \"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\": rpc error: code = NotFound desc = could not find container \"ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2\": container with ID starting with ae2d221cf17bb8d8eb075ce10f4ab352cf9838b5bd010ce892ee95e022dd49a2 not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.120119 4644 scope.go:117] "RemoveContainer" containerID="c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.120393 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11"} err="failed to get container status \"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\": rpc error: code = NotFound desc = could not find container \"c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11\": container with ID starting with c3516f9b4c6d2194632acf59238853e1e7576235e3de6518427bd6a2e1a08e11 not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.120412 4644 scope.go:117] "RemoveContainer" containerID="a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.120723 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9"} err="failed to get container status \"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\": rpc error: code = NotFound desc = could not find container \"a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9\": container with ID starting with a60c4a52c07e6c65b667b60ef4b0ee15faf7a1f30a71f61a15dfe1440c768ef9 not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.120738 4644 scope.go:117] "RemoveContainer" containerID="a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.120925 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9"} err="failed to get container status \"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\": rpc error: code = NotFound desc = could not find container \"a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9\": container with ID starting with a39b2aafa841ae02d8fa47aa97826312ec7ae2f6d2bc4ae52735f38f077155e9 not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.120938 4644 scope.go:117] "RemoveContainer" containerID="45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.121086 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b"} err="failed to get container status \"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\": rpc error: code = NotFound desc = could not find container \"45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b\": container with ID starting with 45dc1b72d35c26608ae4daaeddb7bdc37b1c2237b3364aefeda6f1f88b556d4b not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.121103 4644 scope.go:117] "RemoveContainer" containerID="003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.121248 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57"} err="failed to get container status \"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\": rpc error: code = NotFound desc = could not find container \"003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57\": container with ID starting with 003ec8fea68afea03a4103b6febfedb0a0b8227970781d5eebd95c591fea9a57 not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.121267 4644 scope.go:117] "RemoveContainer" containerID="0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.121491 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee"} err="failed to get container status \"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\": rpc error: code = NotFound desc = could not find container \"0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee\": container with ID starting with 0a5cbbc92c493d4ab789682826fcb9b2b956fcf44d31d8958ac19123edf798ee not found: ID does not exist"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.666913 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98b7bb4a-12ca-4851-bf5a-49d38465ec0d" path="/var/lib/kubelet/pods/98b7bb4a-12ca-4851-bf5a-49d38465ec0d/volumes"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.818811 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mszlj_7aa20f1c-0ad7-449e-a179-e246a52dfb2a/kube-multus/2.log"
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.824369 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" event={"ID":"521012cf-2628-4040-8024-ea825c89ac9f","Type":"ContainerStarted","Data":"d38167506b3b92be096ce3563c14898c7bfc84d064245777e7728de447176e19"}
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.824409 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" event={"ID":"521012cf-2628-4040-8024-ea825c89ac9f","Type":"ContainerStarted","Data":"c6fee90709aa5f5be1201d5b095c4e9366d8f0356a0c046e56f72c6e524582e2"}
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.824423 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" event={"ID":"521012cf-2628-4040-8024-ea825c89ac9f","Type":"ContainerStarted","Data":"adab7e23aa9c338d42f042c5c497d78dd8039d04a0c60e8476d2fa8a8759dc40"}
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.824432 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" event={"ID":"521012cf-2628-4040-8024-ea825c89ac9f","Type":"ContainerStarted","Data":"1b5c72a60adc06dc1232d02b809381759b236298937c851fe8eed6786a175bda"}
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.824443 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" event={"ID":"521012cf-2628-4040-8024-ea825c89ac9f","Type":"ContainerStarted","Data":"c3c3bec8daf5b7ddb42d62144e30e56737a22d95908cd792171525d5d56d3bcc"}
Feb 04 08:51:50 crc kubenswrapper[4644]: I0204 08:51:50.824453 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" event={"ID":"521012cf-2628-4040-8024-ea825c89ac9f","Type":"ContainerStarted","Data":"6ba100e3e44b66a03c87c13ad74818b098e8900acf52512cb3b0980099eed37c"}
Feb 04 08:51:52 crc kubenswrapper[4644]: I0204 08:51:52.849639 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" event={"ID":"521012cf-2628-4040-8024-ea825c89ac9f","Type":"ContainerStarted","Data":"83b5a80e12132da70fd3730db82c3fd9f47b912ea5a41e9beffe18e7416161c8"}
Feb 04 08:51:55 crc kubenswrapper[4644]: I0204 08:51:55.909388 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" event={"ID":"521012cf-2628-4040-8024-ea825c89ac9f","Type":"ContainerStarted","Data":"c4a2dbe7f03dcdfe0c7a33cbb10cee64478e12236b18cf7324260cbcd6824a02"}
Feb 04 08:51:55 crc kubenswrapper[4644]: I0204 08:51:55.910437 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb"
Feb 04 08:51:55 crc kubenswrapper[4644]: I0204 08:51:55.910458 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb"
Feb 04 08:51:55 crc kubenswrapper[4644]: I0204 08:51:55.910469 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb"
Feb 04 08:51:55 crc kubenswrapper[4644]: I0204 08:51:55.950336 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb"
Feb 04 08:51:55 crc kubenswrapper[4644]: I0204 08:51:55.951601 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb"
Feb 04 08:51:55 crc kubenswrapper[4644]: I0204 08:51:55.954735 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb" podStartSLOduration=6.954721557 podStartE2EDuration="6.954721557s" podCreationTimestamp="2026-02-04 08:51:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:51:55.951079598 +0000 UTC m=+625.991137393" watchObservedRunningTime="2026-02-04 08:51:55.954721557 +0000 UTC m=+625.994779312"
Feb 04 08:52:02 crc kubenswrapper[4644]: I0204 08:52:02.659620 4644 scope.go:117] "RemoveContainer" containerID="a41c3d38db21d832941edef1eb09df8ed99a05a9e997b6cdd401a44230fcd4f4"
Feb 04 08:52:02 crc kubenswrapper[4644]: E0204 08:52:02.660468 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mszlj_openshift-multus(7aa20f1c-0ad7-449e-a179-e246a52dfb2a)\"" pod="openshift-multus/multus-mszlj" podUID="7aa20f1c-0ad7-449e-a179-e246a52dfb2a"
Feb 04 08:52:15 crc kubenswrapper[4644]: I0204 08:52:15.660287 4644 scope.go:117] "RemoveContainer" containerID="a41c3d38db21d832941edef1eb09df8ed99a05a9e997b6cdd401a44230fcd4f4"
Feb 04 08:52:16 crc kubenswrapper[4644]: I0204 08:52:16.023241 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mszlj_7aa20f1c-0ad7-449e-a179-e246a52dfb2a/kube-multus/2.log"
Feb 04 08:52:16 crc kubenswrapper[4644]: I0204 08:52:16.023668 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mszlj" event={"ID":"7aa20f1c-0ad7-449e-a179-e246a52dfb2a","Type":"ContainerStarted","Data":"6c5872820db40b1c623376eca16838e44778fb749d373da25d2b88ce75cb083e"}
Feb 04 08:52:19 crc kubenswrapper[4644]: I0204 08:52:19.457001 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xzgvb"
Feb 04 08:52:28 crc kubenswrapper[4644]: I0204 08:52:28.442978 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd"]
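The "back-off 20s restarting failed container" entry above is the kubelet's per-container crash-loop backoff: the delay starts small and doubles on each failed restart up to a cap. A short Go sketch of that policy (a minimal sketch, assuming the conventional 10s initial delay and 5m cap; these constants are assumptions, not read from this log):

    package main

    import (
        "fmt"
        "time"
    )

    // Assumed values: restart backoff traditionally starts at 10s and
    // doubles per failure up to a 5m ceiling.
    const (
        initialBackoff = 10 * time.Second
        maxBackoff     = 5 * time.Minute
    )

    // backoffFor returns the delay before restart attempt n (0-based).
    func backoffFor(n int) time.Duration {
        d := initialBackoff
        for i := 0; i < n; i++ {
            d *= 2
            if d > maxBackoff {
                return maxBackoff
            }
        }
        return d
    }

    func main() {
        // The second failure yields the "back-off 20s" seen in the log.
        for n := 0; n < 7; n++ {
            fmt.Printf("attempt %d: wait %v\n", n, backoffFor(n))
        }
    }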
Feb 04 08:52:28 crc kubenswrapper[4644]: I0204 08:52:28.444582 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd"
Feb 04 08:52:28 crc kubenswrapper[4644]: I0204 08:52:28.460020 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 04 08:52:28 crc kubenswrapper[4644]: I0204 08:52:28.467041 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-util\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd\" (UID: \"c161aa28-3e38-4cd4-9b44-29cffcdf6c81\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd"
Feb 04 08:52:28 crc kubenswrapper[4644]: I0204 08:52:28.467146 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-bundle\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd\" (UID: \"c161aa28-3e38-4cd4-9b44-29cffcdf6c81\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd"
Feb 04 08:52:28 crc kubenswrapper[4644]: I0204 08:52:28.467167 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j78bl\" (UniqueName: \"kubernetes.io/projected/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-kube-api-access-j78bl\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd\" (UID: \"c161aa28-3e38-4cd4-9b44-29cffcdf6c81\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd"
Feb 04 08:52:28 crc kubenswrapper[4644]: I0204 08:52:28.467818 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd"]
Feb 04 08:52:28 crc kubenswrapper[4644]: I0204 08:52:28.568101 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-util\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd\" (UID: \"c161aa28-3e38-4cd4-9b44-29cffcdf6c81\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd"
Feb 04 08:52:28 crc kubenswrapper[4644]: I0204 08:52:28.568179 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-bundle\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd\" (UID: \"c161aa28-3e38-4cd4-9b44-29cffcdf6c81\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd"
Feb 04 08:52:28 crc kubenswrapper[4644]: I0204 08:52:28.568202 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j78bl\" (UniqueName: \"kubernetes.io/projected/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-kube-api-access-j78bl\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd\" (UID: \"c161aa28-3e38-4cd4-9b44-29cffcdf6c81\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd"
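The VerifyControllerAttachedVolume → MountVolume → SetUp progression above is a desired-state/actual-state reconciler: the kubelet's volume manager compares what the pod spec declares against what is actually mounted and issues operations for the difference. A toy sketch of that loop (illustrative types and names, not the kubelet's volumemanager API):

    package main

    import "fmt"

    // Toy model of the volume reconciler: compare what pods declare
    // (desired) with what is mounted (actual) and issue operations for
    // the difference.
    type volumeName string

    func reconcile(desired, actual map[volumeName]bool) {
        // Mount anything desired but not yet mounted.
        for v := range desired {
            if !actual[v] {
                fmt.Printf("MountVolume started for volume %q\n", v)
                actual[v] = true // pretend SetUp succeeded
                fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
            }
        }
        // Unmount anything mounted but no longer desired.
        for v := range actual {
            if !desired[v] {
                fmt.Printf("UnmountVolume started for volume %q\n", v)
                delete(actual, v)
            }
        }
    }

    func main() {
        desired := map[volumeName]bool{"util": true, "bundle": true, "kube-api-access-j78bl": true}
        actual := map[volumeName]bool{}
        reconcile(desired, actual)
    }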
\"kubernetes.io/empty-dir/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-util\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd\" (UID: \"c161aa28-3e38-4cd4-9b44-29cffcdf6c81\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd" Feb 04 08:52:28 crc kubenswrapper[4644]: I0204 08:52:28.568907 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-bundle\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd\" (UID: \"c161aa28-3e38-4cd4-9b44-29cffcdf6c81\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd" Feb 04 08:52:28 crc kubenswrapper[4644]: I0204 08:52:28.584580 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j78bl\" (UniqueName: \"kubernetes.io/projected/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-kube-api-access-j78bl\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd\" (UID: \"c161aa28-3e38-4cd4-9b44-29cffcdf6c81\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd" Feb 04 08:52:28 crc kubenswrapper[4644]: I0204 08:52:28.768972 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd" Feb 04 08:52:29 crc kubenswrapper[4644]: I0204 08:52:29.273900 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd"] Feb 04 08:52:30 crc kubenswrapper[4644]: I0204 08:52:30.104143 4644 generic.go:334] "Generic (PLEG): container finished" podID="c161aa28-3e38-4cd4-9b44-29cffcdf6c81" containerID="131fae25d7126f34e50bd7104742aaa117eab11060846d312b0d03cf8add8a0f" exitCode=0 Feb 04 08:52:30 crc kubenswrapper[4644]: I0204 08:52:30.104216 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd" event={"ID":"c161aa28-3e38-4cd4-9b44-29cffcdf6c81","Type":"ContainerDied","Data":"131fae25d7126f34e50bd7104742aaa117eab11060846d312b0d03cf8add8a0f"} Feb 04 08:52:30 crc kubenswrapper[4644]: I0204 08:52:30.104623 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd" event={"ID":"c161aa28-3e38-4cd4-9b44-29cffcdf6c81","Type":"ContainerStarted","Data":"3d93196c481918740b74beb7f8f6ff8e3d0a43e3774e9c1397cffcae024962df"} Feb 04 08:52:32 crc kubenswrapper[4644]: I0204 08:52:32.121506 4644 generic.go:334] "Generic (PLEG): container finished" podID="c161aa28-3e38-4cd4-9b44-29cffcdf6c81" containerID="c7a975a69d2dda35d2caa54fd613b175b892db1dc46d11aefe89ec8348942e02" exitCode=0 Feb 04 08:52:32 crc kubenswrapper[4644]: I0204 08:52:32.121558 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd" event={"ID":"c161aa28-3e38-4cd4-9b44-29cffcdf6c81","Type":"ContainerDied","Data":"c7a975a69d2dda35d2caa54fd613b175b892db1dc46d11aefe89ec8348942e02"} Feb 04 08:52:33 crc kubenswrapper[4644]: I0204 08:52:33.130764 4644 generic.go:334] "Generic (PLEG): container finished" podID="c161aa28-3e38-4cd4-9b44-29cffcdf6c81" containerID="3d61ae568be6475fffc53843fc5cda43065e9d63bd1ff6102830df28a3baa72c" exitCode=0 Feb 04 08:52:33 crc kubenswrapper[4644]: I0204 
Feb 04 08:52:33 crc kubenswrapper[4644]: I0204 08:52:33.130802 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd" event={"ID":"c161aa28-3e38-4cd4-9b44-29cffcdf6c81","Type":"ContainerDied","Data":"3d61ae568be6475fffc53843fc5cda43065e9d63bd1ff6102830df28a3baa72c"}
Feb 04 08:52:34 crc kubenswrapper[4644]: I0204 08:52:34.388166 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd"
Feb 04 08:52:34 crc kubenswrapper[4644]: I0204 08:52:34.441427 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-bundle\") pod \"c161aa28-3e38-4cd4-9b44-29cffcdf6c81\" (UID: \"c161aa28-3e38-4cd4-9b44-29cffcdf6c81\") "
Feb 04 08:52:34 crc kubenswrapper[4644]: I0204 08:52:34.441507 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j78bl\" (UniqueName: \"kubernetes.io/projected/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-kube-api-access-j78bl\") pod \"c161aa28-3e38-4cd4-9b44-29cffcdf6c81\" (UID: \"c161aa28-3e38-4cd4-9b44-29cffcdf6c81\") "
Feb 04 08:52:34 crc kubenswrapper[4644]: I0204 08:52:34.441564 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-util\") pod \"c161aa28-3e38-4cd4-9b44-29cffcdf6c81\" (UID: \"c161aa28-3e38-4cd4-9b44-29cffcdf6c81\") "
Feb 04 08:52:34 crc kubenswrapper[4644]: I0204 08:52:34.442290 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-bundle" (OuterVolumeSpecName: "bundle") pod "c161aa28-3e38-4cd4-9b44-29cffcdf6c81" (UID: "c161aa28-3e38-4cd4-9b44-29cffcdf6c81"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 08:52:34 crc kubenswrapper[4644]: I0204 08:52:34.449505 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-kube-api-access-j78bl" (OuterVolumeSpecName: "kube-api-access-j78bl") pod "c161aa28-3e38-4cd4-9b44-29cffcdf6c81" (UID: "c161aa28-3e38-4cd4-9b44-29cffcdf6c81"). InnerVolumeSpecName "kube-api-access-j78bl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 08:52:34 crc kubenswrapper[4644]: I0204 08:52:34.456358 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-util" (OuterVolumeSpecName: "util") pod "c161aa28-3e38-4cd4-9b44-29cffcdf6c81" (UID: "c161aa28-3e38-4cd4-9b44-29cffcdf6c81"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 08:52:34 crc kubenswrapper[4644]: I0204 08:52:34.542678 4644 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-util\") on node \"crc\" DevicePath \"\""
Feb 04 08:52:34 crc kubenswrapper[4644]: I0204 08:52:34.542739 4644 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 08:52:34 crc kubenswrapper[4644]: I0204 08:52:34.542760 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j78bl\" (UniqueName: \"kubernetes.io/projected/c161aa28-3e38-4cd4-9b44-29cffcdf6c81-kube-api-access-j78bl\") on node \"crc\" DevicePath \"\""
Feb 04 08:52:35 crc kubenswrapper[4644]: I0204 08:52:35.143430 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd" event={"ID":"c161aa28-3e38-4cd4-9b44-29cffcdf6c81","Type":"ContainerDied","Data":"3d93196c481918740b74beb7f8f6ff8e3d0a43e3774e9c1397cffcae024962df"}
Feb 04 08:52:35 crc kubenswrapper[4644]: I0204 08:52:35.143471 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd"
Feb 04 08:52:35 crc kubenswrapper[4644]: I0204 08:52:35.143482 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d93196c481918740b74beb7f8f6ff8e3d0a43e3774e9c1397cffcae024962df"
Feb 04 08:52:37 crc kubenswrapper[4644]: I0204 08:52:37.296291 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-57bf49857b-w2rnn"]
Feb 04 08:52:37 crc kubenswrapper[4644]: E0204 08:52:37.296720 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c161aa28-3e38-4cd4-9b44-29cffcdf6c81" containerName="util"
Feb 04 08:52:37 crc kubenswrapper[4644]: I0204 08:52:37.296731 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c161aa28-3e38-4cd4-9b44-29cffcdf6c81" containerName="util"
Feb 04 08:52:37 crc kubenswrapper[4644]: E0204 08:52:37.296742 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c161aa28-3e38-4cd4-9b44-29cffcdf6c81" containerName="extract"
Feb 04 08:52:37 crc kubenswrapper[4644]: I0204 08:52:37.296748 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c161aa28-3e38-4cd4-9b44-29cffcdf6c81" containerName="extract"
Feb 04 08:52:37 crc kubenswrapper[4644]: E0204 08:52:37.296763 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c161aa28-3e38-4cd4-9b44-29cffcdf6c81" containerName="pull"
Feb 04 08:52:37 crc kubenswrapper[4644]: I0204 08:52:37.296769 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c161aa28-3e38-4cd4-9b44-29cffcdf6c81" containerName="pull"
Feb 04 08:52:37 crc kubenswrapper[4644]: I0204 08:52:37.296848 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="c161aa28-3e38-4cd4-9b44-29cffcdf6c81" containerName="extract"
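The cpu_manager/state_mem/memory_manager entries above show resource-manager bookkeeping being purged for containers of a pod that no longer exists (the finished bundle-extract pod's pull/extract/util containers). The pattern is a per-(podUID, container) state map swept against the set of active pods; a toy sketch under those assumptions:

    package main

    import "fmt"

    // Sketch of RemoveStaleState: resource managers keep per-container
    // assignments keyed by (podUID, containerName); when a pod vanishes
    // from the active set, its leftover entries are deleted.
    type key struct {
        podUID    string
        container string
    }

    func removeStaleState(assignments map[key]string, activePods map[string]bool) {
        for k := range assignments {
            if !activePods[k.podUID] {
                fmt.Printf("removing stale assignment pod=%s container=%s\n", k.podUID, k.container)
                delete(assignments, k)
            }
        }
    }

    func main() {
        assignments := map[key]string{
            {"c161aa28-3e38-4cd4-9b44-29cffcdf6c81", "util"}:    "cpuset 0-1",
            {"c161aa28-3e38-4cd4-9b44-29cffcdf6c81", "extract"}: "cpuset 0-1",
            {"c161aa28-3e38-4cd4-9b44-29cffcdf6c81", "pull"}:    "cpuset 0-1",
        }
        removeStaleState(assignments, map[string]bool{}) // pod already deleted
    }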
Feb 04 08:52:37 crc kubenswrapper[4644]: I0204 08:52:37.297271 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-57bf49857b-w2rnn"
Feb 04 08:52:37 crc kubenswrapper[4644]: I0204 08:52:37.301392 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 04 08:52:37 crc kubenswrapper[4644]: I0204 08:52:37.301429 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 04 08:52:37 crc kubenswrapper[4644]: I0204 08:52:37.301449 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-jzmpq"
Feb 04 08:52:37 crc kubenswrapper[4644]: I0204 08:52:37.315839 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-57bf49857b-w2rnn"]
Feb 04 08:52:37 crc kubenswrapper[4644]: I0204 08:52:37.484836 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2wmb\" (UniqueName: \"kubernetes.io/projected/292e6d27-c5ff-4352-a25e-a8b40030e9e2-kube-api-access-k2wmb\") pod \"nmstate-operator-57bf49857b-w2rnn\" (UID: \"292e6d27-c5ff-4352-a25e-a8b40030e9e2\") " pod="openshift-nmstate/nmstate-operator-57bf49857b-w2rnn"
Feb 04 08:52:37 crc kubenswrapper[4644]: I0204 08:52:37.585845 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2wmb\" (UniqueName: \"kubernetes.io/projected/292e6d27-c5ff-4352-a25e-a8b40030e9e2-kube-api-access-k2wmb\") pod \"nmstate-operator-57bf49857b-w2rnn\" (UID: \"292e6d27-c5ff-4352-a25e-a8b40030e9e2\") " pod="openshift-nmstate/nmstate-operator-57bf49857b-w2rnn"
Feb 04 08:52:37 crc kubenswrapper[4644]: I0204 08:52:37.605110 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2wmb\" (UniqueName: \"kubernetes.io/projected/292e6d27-c5ff-4352-a25e-a8b40030e9e2-kube-api-access-k2wmb\") pod \"nmstate-operator-57bf49857b-w2rnn\" (UID: \"292e6d27-c5ff-4352-a25e-a8b40030e9e2\") " pod="openshift-nmstate/nmstate-operator-57bf49857b-w2rnn"
Feb 04 08:52:37 crc kubenswrapper[4644]: I0204 08:52:37.612621 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-57bf49857b-w2rnn"
Feb 04 08:52:38 crc kubenswrapper[4644]: I0204 08:52:38.082390 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-57bf49857b-w2rnn"]
Feb 04 08:52:38 crc kubenswrapper[4644]: I0204 08:52:38.161585 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-57bf49857b-w2rnn" event={"ID":"292e6d27-c5ff-4352-a25e-a8b40030e9e2","Type":"ContainerStarted","Data":"76004e241bc5d80d8be4137d770bb39e39046817096af2c359830ed0c6041811"}
Feb 04 08:52:41 crc kubenswrapper[4644]: I0204 08:52:41.181073 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-57bf49857b-w2rnn" event={"ID":"292e6d27-c5ff-4352-a25e-a8b40030e9e2","Type":"ContainerStarted","Data":"312e2d0fc3ad1674fd828d538981bb2e889105ea68f0c386420ad82e4d55eb61"}
Feb 04 08:52:41 crc kubenswrapper[4644]: I0204 08:52:41.205708 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-57bf49857b-w2rnn" podStartSLOduration=2.2305445600000002 podStartE2EDuration="4.20568814s" podCreationTimestamp="2026-02-04 08:52:37 +0000 UTC" firstStartedPulling="2026-02-04 08:52:38.0924252 +0000 UTC m=+668.132482955" lastFinishedPulling="2026-02-04 08:52:40.06756878 +0000 UTC m=+670.107626535" observedRunningTime="2026-02-04 08:52:41.203678445 +0000 UTC m=+671.243736210" watchObservedRunningTime="2026-02-04 08:52:41.20568814 +0000 UTC m=+671.245745915"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.236416 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-677949fd65-q44mg"]
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.237202 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-677949fd65-q44mg"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.239543 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsz6t\" (UniqueName: \"kubernetes.io/projected/bf7f3412-56f2-4b59-bd63-86f748e1d27f-kube-api-access-qsz6t\") pod \"nmstate-metrics-677949fd65-q44mg\" (UID: \"bf7f3412-56f2-4b59-bd63-86f748e1d27f\") " pod="openshift-nmstate/nmstate-metrics-677949fd65-q44mg"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.242382 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-rkzdt"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.259019 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-677949fd65-q44mg"]
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.265633 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp"]
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.266351 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.268489 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.271001 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-q22qq"]
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.273375 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-q22qq"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.301163 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp"]
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.340769 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/736f2cd3-420f-4c26-91ad-acd900c9fa01-ovs-socket\") pod \"nmstate-handler-q22qq\" (UID: \"736f2cd3-420f-4c26-91ad-acd900c9fa01\") " pod="openshift-nmstate/nmstate-handler-q22qq"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.340855 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2e070277-6ff5-41d0-ade7-81a146232b83-tls-key-pair\") pod \"nmstate-webhook-bd5678b45-bzsxp\" (UID: \"2e070277-6ff5-41d0-ade7-81a146232b83\") " pod="openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.340877 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/736f2cd3-420f-4c26-91ad-acd900c9fa01-dbus-socket\") pod \"nmstate-handler-q22qq\" (UID: \"736f2cd3-420f-4c26-91ad-acd900c9fa01\") " pod="openshift-nmstate/nmstate-handler-q22qq"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.340938 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/736f2cd3-420f-4c26-91ad-acd900c9fa01-nmstate-lock\") pod \"nmstate-handler-q22qq\" (UID: \"736f2cd3-420f-4c26-91ad-acd900c9fa01\") " pod="openshift-nmstate/nmstate-handler-q22qq"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.340989 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77vqb\" (UniqueName: \"kubernetes.io/projected/2e070277-6ff5-41d0-ade7-81a146232b83-kube-api-access-77vqb\") pod \"nmstate-webhook-bd5678b45-bzsxp\" (UID: \"2e070277-6ff5-41d0-ade7-81a146232b83\") " pod="openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.341060 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqlkl\" (UniqueName: \"kubernetes.io/projected/736f2cd3-420f-4c26-91ad-acd900c9fa01-kube-api-access-lqlkl\") pod \"nmstate-handler-q22qq\" (UID: \"736f2cd3-420f-4c26-91ad-acd900c9fa01\") " pod="openshift-nmstate/nmstate-handler-q22qq"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.341095 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsz6t\" (UniqueName: \"kubernetes.io/projected/bf7f3412-56f2-4b59-bd63-86f748e1d27f-kube-api-access-qsz6t\") pod \"nmstate-metrics-677949fd65-q44mg\" (UID: \"bf7f3412-56f2-4b59-bd63-86f748e1d27f\") " pod="openshift-nmstate/nmstate-metrics-677949fd65-q44mg"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.378865 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsz6t\" (UniqueName: \"kubernetes.io/projected/bf7f3412-56f2-4b59-bd63-86f748e1d27f-kube-api-access-qsz6t\") pod \"nmstate-metrics-677949fd65-q44mg\" (UID: \"bf7f3412-56f2-4b59-bd63-86f748e1d27f\") " pod="openshift-nmstate/nmstate-metrics-677949fd65-q44mg"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.442377 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77vqb\" (UniqueName: \"kubernetes.io/projected/2e070277-6ff5-41d0-ade7-81a146232b83-kube-api-access-77vqb\") pod \"nmstate-webhook-bd5678b45-bzsxp\" (UID: \"2e070277-6ff5-41d0-ade7-81a146232b83\") " pod="openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.442451 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqlkl\" (UniqueName: \"kubernetes.io/projected/736f2cd3-420f-4c26-91ad-acd900c9fa01-kube-api-access-lqlkl\") pod \"nmstate-handler-q22qq\" (UID: \"736f2cd3-420f-4c26-91ad-acd900c9fa01\") " pod="openshift-nmstate/nmstate-handler-q22qq"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.442483 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/736f2cd3-420f-4c26-91ad-acd900c9fa01-ovs-socket\") pod \"nmstate-handler-q22qq\" (UID: \"736f2cd3-420f-4c26-91ad-acd900c9fa01\") " pod="openshift-nmstate/nmstate-handler-q22qq"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.442520 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2e070277-6ff5-41d0-ade7-81a146232b83-tls-key-pair\") pod \"nmstate-webhook-bd5678b45-bzsxp\" (UID: \"2e070277-6ff5-41d0-ade7-81a146232b83\") " pod="openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.442539 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/736f2cd3-420f-4c26-91ad-acd900c9fa01-dbus-socket\") pod \"nmstate-handler-q22qq\" (UID: \"736f2cd3-420f-4c26-91ad-acd900c9fa01\") " pod="openshift-nmstate/nmstate-handler-q22qq"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.442574 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/736f2cd3-420f-4c26-91ad-acd900c9fa01-nmstate-lock\") pod \"nmstate-handler-q22qq\" (UID: \"736f2cd3-420f-4c26-91ad-acd900c9fa01\") " pod="openshift-nmstate/nmstate-handler-q22qq"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.442641 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/736f2cd3-420f-4c26-91ad-acd900c9fa01-nmstate-lock\") pod \"nmstate-handler-q22qq\" (UID: \"736f2cd3-420f-4c26-91ad-acd900c9fa01\") " pod="openshift-nmstate/nmstate-handler-q22qq"
Feb 04 08:52:42 crc kubenswrapper[4644]: E0204 08:52:42.442783 4644 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
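The nestedpendingoperations entry that follows records the standard reaction to the secret lookup failure above: the failed mount is not retried immediately but gated behind a durationBeforeRetry window (500ms here), which grows on repeated failures. A minimal sketch of that retry gate (initial delay taken from the log; the doubling policy and 2m cap are assumptions, not kubelet constants):

    package main

    import (
        "fmt"
        "time"
    )

    // Sketch of an exponential retry gate in the style of the
    // "No retries permitted until ... (durationBeforeRetry 500ms)" entry.
    type retryGate struct {
        delay   time.Duration
        notTill time.Time
    }

    func (g *retryGate) fail(now time.Time) {
        if g.delay == 0 {
            g.delay = 500 * time.Millisecond // initial value, as in the log
        } else if g.delay < 2*time.Minute { // cap is an assumption
            g.delay *= 2
        }
        g.notTill = now.Add(g.delay)
    }

    func (g *retryGate) allowed(now time.Time) error {
        if now.Before(g.notTill) {
            return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
                g.notTill.Format("15:04:05.000"), g.delay)
        }
        return nil
    }

    func main() {
        var g retryGate
        now := time.Now()
        g.fail(now) // MountVolume.SetUp failed: secret not found
        if err := g.allowed(now.Add(100 * time.Millisecond)); err != nil {
            fmt.Println(err) // still inside the 500ms window
        }
        if g.allowed(now.Add(600*time.Millisecond)) == nil {
            fmt.Println("retry allowed") // window elapsed; mount re-attempted
        }
    }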
Feb 04 08:52:42 crc kubenswrapper[4644]: E0204 08:52:42.442836 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e070277-6ff5-41d0-ade7-81a146232b83-tls-key-pair podName:2e070277-6ff5-41d0-ade7-81a146232b83 nodeName:}" failed. No retries permitted until 2026-02-04 08:52:42.942817271 +0000 UTC m=+672.982875026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/2e070277-6ff5-41d0-ade7-81a146232b83-tls-key-pair") pod "nmstate-webhook-bd5678b45-bzsxp" (UID: "2e070277-6ff5-41d0-ade7-81a146232b83") : secret "openshift-nmstate-webhook" not found
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.442897 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/736f2cd3-420f-4c26-91ad-acd900c9fa01-ovs-socket\") pod \"nmstate-handler-q22qq\" (UID: \"736f2cd3-420f-4c26-91ad-acd900c9fa01\") " pod="openshift-nmstate/nmstate-handler-q22qq"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.443083 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/736f2cd3-420f-4c26-91ad-acd900c9fa01-dbus-socket\") pod \"nmstate-handler-q22qq\" (UID: \"736f2cd3-420f-4c26-91ad-acd900c9fa01\") " pod="openshift-nmstate/nmstate-handler-q22qq"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.461090 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n"]
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.462238 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.472748 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-b49qw"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.473210 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.473620 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.485525 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77vqb\" (UniqueName: \"kubernetes.io/projected/2e070277-6ff5-41d0-ade7-81a146232b83-kube-api-access-77vqb\") pod \"nmstate-webhook-bd5678b45-bzsxp\" (UID: \"2e070277-6ff5-41d0-ade7-81a146232b83\") " pod="openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.486686 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqlkl\" (UniqueName: \"kubernetes.io/projected/736f2cd3-420f-4c26-91ad-acd900c9fa01-kube-api-access-lqlkl\") pod \"nmstate-handler-q22qq\" (UID: \"736f2cd3-420f-4c26-91ad-acd900c9fa01\") " pod="openshift-nmstate/nmstate-handler-q22qq"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.489586 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n"]
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.555906 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-677949fd65-q44mg"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.556428 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ba6f84-da44-468a-bf88-2d5861308d59-plugin-serving-cert\") pod \"nmstate-console-plugin-6f874f9768-mhn4n\" (UID: \"19ba6f84-da44-468a-bf88-2d5861308d59\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.556502 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z72qw\" (UniqueName: \"kubernetes.io/projected/19ba6f84-da44-468a-bf88-2d5861308d59-kube-api-access-z72qw\") pod \"nmstate-console-plugin-6f874f9768-mhn4n\" (UID: \"19ba6f84-da44-468a-bf88-2d5861308d59\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.556537 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/19ba6f84-da44-468a-bf88-2d5861308d59-nginx-conf\") pod \"nmstate-console-plugin-6f874f9768-mhn4n\" (UID: \"19ba6f84-da44-468a-bf88-2d5861308d59\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.613616 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-q22qq"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.657861 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/19ba6f84-da44-468a-bf88-2d5861308d59-nginx-conf\") pod \"nmstate-console-plugin-6f874f9768-mhn4n\" (UID: \"19ba6f84-da44-468a-bf88-2d5861308d59\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.657918 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ba6f84-da44-468a-bf88-2d5861308d59-plugin-serving-cert\") pod \"nmstate-console-plugin-6f874f9768-mhn4n\" (UID: \"19ba6f84-da44-468a-bf88-2d5861308d59\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n"
Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.658030 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z72qw\" (UniqueName: \"kubernetes.io/projected/19ba6f84-da44-468a-bf88-2d5861308d59-kube-api-access-z72qw\") pod \"nmstate-console-plugin-6f874f9768-mhn4n\" (UID: \"19ba6f84-da44-468a-bf88-2d5861308d59\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n"
Feb 04 08:52:42 crc kubenswrapper[4644]: E0204 08:52:42.658506 4644 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
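When triaging excerpts like this, it helps to split klog's header (severity letter, MMDD date, timestamp, pid, file:line) from the message so that the E-level entries, such as the secret lookup failure above, can be filtered mechanically. A small standard-library sketch of that parsing (the regular expression is an assumption about the header shape, not an official klog grammar):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Parse a klog header such as:
    //   E0204 08:52:42.658506 4644 secret.go:188] Couldn't get secret ...
    // Groups: severity, MMDD, time, pid, file:line, message.
    var klogRe = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) (\d+) ([\w.]+:\d+)\] (.*)$`)

    func main() {
        line := `E0204 08:52:42.658506 4644 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found`
        m := klogRe.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s\n", m[1], m[2], m[3], m[4], m[5])
        fmt.Printf("msg=%s\n", m[6])
    }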
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/19ba6f84-da44-468a-bf88-2d5861308d59-plugin-serving-cert") pod "nmstate-console-plugin-6f874f9768-mhn4n" (UID: "19ba6f84-da44-468a-bf88-2d5861308d59") : secret "plugin-serving-cert" not found Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.658694 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/19ba6f84-da44-468a-bf88-2d5861308d59-nginx-conf\") pod \"nmstate-console-plugin-6f874f9768-mhn4n\" (UID: \"19ba6f84-da44-468a-bf88-2d5861308d59\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.680496 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z72qw\" (UniqueName: \"kubernetes.io/projected/19ba6f84-da44-468a-bf88-2d5861308d59-kube-api-access-z72qw\") pod \"nmstate-console-plugin-6f874f9768-mhn4n\" (UID: \"19ba6f84-da44-468a-bf88-2d5861308d59\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.703475 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-74d86bd554-k2fg7"] Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.704126 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.721160 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74d86bd554-k2fg7"] Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.862194 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shh49\" (UniqueName: \"kubernetes.io/projected/7381947d-88c0-4f78-a009-a07118ddb3c0-kube-api-access-shh49\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.862247 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7381947d-88c0-4f78-a009-a07118ddb3c0-console-oauth-config\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.862300 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7381947d-88c0-4f78-a009-a07118ddb3c0-console-config\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.862336 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7381947d-88c0-4f78-a009-a07118ddb3c0-oauth-serving-cert\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.862363 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/7381947d-88c0-4f78-a009-a07118ddb3c0-service-ca\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.862386 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7381947d-88c0-4f78-a009-a07118ddb3c0-console-serving-cert\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.862410 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7381947d-88c0-4f78-a009-a07118ddb3c0-trusted-ca-bundle\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.963132 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7381947d-88c0-4f78-a009-a07118ddb3c0-service-ca\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.963179 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7381947d-88c0-4f78-a009-a07118ddb3c0-console-serving-cert\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.963205 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7381947d-88c0-4f78-a009-a07118ddb3c0-trusted-ca-bundle\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.963254 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2e070277-6ff5-41d0-ade7-81a146232b83-tls-key-pair\") pod \"nmstate-webhook-bd5678b45-bzsxp\" (UID: \"2e070277-6ff5-41d0-ade7-81a146232b83\") " pod="openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.963270 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shh49\" (UniqueName: \"kubernetes.io/projected/7381947d-88c0-4f78-a009-a07118ddb3c0-kube-api-access-shh49\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.963292 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7381947d-88c0-4f78-a009-a07118ddb3c0-console-oauth-config\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.963511 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/7381947d-88c0-4f78-a009-a07118ddb3c0-console-config\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.963598 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7381947d-88c0-4f78-a009-a07118ddb3c0-oauth-serving-cert\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.964041 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7381947d-88c0-4f78-a009-a07118ddb3c0-service-ca\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.964648 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7381947d-88c0-4f78-a009-a07118ddb3c0-console-config\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.964788 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7381947d-88c0-4f78-a009-a07118ddb3c0-oauth-serving-cert\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.964826 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7381947d-88c0-4f78-a009-a07118ddb3c0-trusted-ca-bundle\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.968227 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2e070277-6ff5-41d0-ade7-81a146232b83-tls-key-pair\") pod \"nmstate-webhook-bd5678b45-bzsxp\" (UID: \"2e070277-6ff5-41d0-ade7-81a146232b83\") " pod="openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.968286 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7381947d-88c0-4f78-a009-a07118ddb3c0-console-oauth-config\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.968828 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7381947d-88c0-4f78-a009-a07118ddb3c0-console-serving-cert\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:42 crc kubenswrapper[4644]: I0204 08:52:42.984511 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shh49\" (UniqueName: 
\"kubernetes.io/projected/7381947d-88c0-4f78-a009-a07118ddb3c0-kube-api-access-shh49\") pod \"console-74d86bd554-k2fg7\" (UID: \"7381947d-88c0-4f78-a009-a07118ddb3c0\") " pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:43 crc kubenswrapper[4644]: I0204 08:52:43.022120 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:43 crc kubenswrapper[4644]: I0204 08:52:43.083363 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-677949fd65-q44mg"] Feb 04 08:52:43 crc kubenswrapper[4644]: W0204 08:52:43.090513 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf7f3412_56f2_4b59_bd63_86f748e1d27f.slice/crio-66390b3b2ec022c5da6bd1233bf0a5b05dbed74e95b41030ca96680c4da2d423 WatchSource:0}: Error finding container 66390b3b2ec022c5da6bd1233bf0a5b05dbed74e95b41030ca96680c4da2d423: Status 404 returned error can't find the container with id 66390b3b2ec022c5da6bd1233bf0a5b05dbed74e95b41030ca96680c4da2d423 Feb 04 08:52:43 crc kubenswrapper[4644]: I0204 08:52:43.167575 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ba6f84-da44-468a-bf88-2d5861308d59-plugin-serving-cert\") pod \"nmstate-console-plugin-6f874f9768-mhn4n\" (UID: \"19ba6f84-da44-468a-bf88-2d5861308d59\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n" Feb 04 08:52:43 crc kubenswrapper[4644]: I0204 08:52:43.172300 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/19ba6f84-da44-468a-bf88-2d5861308d59-plugin-serving-cert\") pod \"nmstate-console-plugin-6f874f9768-mhn4n\" (UID: \"19ba6f84-da44-468a-bf88-2d5861308d59\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n" Feb 04 08:52:43 crc kubenswrapper[4644]: I0204 08:52:43.192969 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q22qq" event={"ID":"736f2cd3-420f-4c26-91ad-acd900c9fa01","Type":"ContainerStarted","Data":"89a5896fc5be7b9e5d2165dc2ae8bf614b78ced9c15fb0790e8285ff6a8eb190"} Feb 04 08:52:43 crc kubenswrapper[4644]: I0204 08:52:43.193990 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-677949fd65-q44mg" event={"ID":"bf7f3412-56f2-4b59-bd63-86f748e1d27f","Type":"ContainerStarted","Data":"66390b3b2ec022c5da6bd1233bf0a5b05dbed74e95b41030ca96680c4da2d423"} Feb 04 08:52:43 crc kubenswrapper[4644]: I0204 08:52:43.202970 4644 util.go:30] "No sandbox for pod can be found. 
Feb 04 08:52:43 crc kubenswrapper[4644]: I0204 08:52:43.202970 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp"
Feb 04 08:52:43 crc kubenswrapper[4644]: I0204 08:52:43.216553 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74d86bd554-k2fg7"]
Feb 04 08:52:43 crc kubenswrapper[4644]: W0204 08:52:43.224710 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7381947d_88c0_4f78_a009_a07118ddb3c0.slice/crio-d7662eb9915500c2744a29384c6581eb64806c64cdb98b95bc6e501c04d1603c WatchSource:0}: Error finding container d7662eb9915500c2744a29384c6581eb64806c64cdb98b95bc6e501c04d1603c: Status 404 returned error can't find the container with id d7662eb9915500c2744a29384c6581eb64806c64cdb98b95bc6e501c04d1603c
Feb 04 08:52:43 crc kubenswrapper[4644]: I0204 08:52:43.376636 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp"]
Feb 04 08:52:43 crc kubenswrapper[4644]: I0204 08:52:43.386054 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n"
Feb 04 08:52:43 crc kubenswrapper[4644]: I0204 08:52:43.609364 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n"]
Feb 04 08:52:44 crc kubenswrapper[4644]: I0204 08:52:44.201252 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74d86bd554-k2fg7" event={"ID":"7381947d-88c0-4f78-a009-a07118ddb3c0","Type":"ContainerStarted","Data":"d61fe939acc2a490513e52a510a9cab66d0f34655bbb6c06f94d6ef96161b63d"}
Feb 04 08:52:44 crc kubenswrapper[4644]: I0204 08:52:44.201689 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74d86bd554-k2fg7" event={"ID":"7381947d-88c0-4f78-a009-a07118ddb3c0","Type":"ContainerStarted","Data":"d7662eb9915500c2744a29384c6581eb64806c64cdb98b95bc6e501c04d1603c"}
Feb 04 08:52:44 crc kubenswrapper[4644]: I0204 08:52:44.203551 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n" event={"ID":"19ba6f84-da44-468a-bf88-2d5861308d59","Type":"ContainerStarted","Data":"0c7c3bf750fc01d33c8c15156468cf9a843fafa8c7e852843b0d7c9788277a0b"}
Feb 04 08:52:44 crc kubenswrapper[4644]: I0204 08:52:44.206752 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp" event={"ID":"2e070277-6ff5-41d0-ade7-81a146232b83","Type":"ContainerStarted","Data":"cb11a5378018512bffe12c4a754099e6f793f5166fbf37cfe65104c2bdf88f13"}
Feb 04 08:52:44 crc kubenswrapper[4644]: I0204 08:52:44.226924 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-74d86bd554-k2fg7" podStartSLOduration=2.226904493 podStartE2EDuration="2.226904493s" podCreationTimestamp="2026-02-04 08:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:52:44.223628063 +0000 UTC m=+674.263685828" watchObservedRunningTime="2026-02-04 08:52:44.226904493 +0000 UTC m=+674.266962238"
Feb 04 08:52:46 crc kubenswrapper[4644]: I0204 08:52:46.219711 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp" event={"ID":"2e070277-6ff5-41d0-ade7-81a146232b83","Type":"ContainerStarted","Data":"329584c84870567bb5a0a56eaa82adcbe1b634051fcfb6ee939cd138626df69b"}
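
The pod_startup_latency_tracker entries follow one relation worth knowing when reading these lines: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), so pull time is excluded from the SLO figure. For console above, the pull timestamps are zero-valued and SLO equals E2E (2.226904493s). For nmstate-webhook-bd5678b45-bzsxp just below, 4.264589481 - (675.182695526 - 673.436377778) = 2.518271733, exactly what the tracker prints (modulo float noise: 2.5182717329999997). A quick check of that arithmetic, using the numbers from the log:

    package main

    import "fmt"

    // Check of the SLO relation using the nmstate-webhook numbers above.
    // The monotonic m=+ offsets are used directly, so no time parsing is needed.
    func main() {
        e2e := 4.264589481                   // podStartE2EDuration, seconds
        firstStartedPulling := 673.436377778 // m=+ offset of firstStartedPulling
        lastFinishedPulling := 675.182695526 // m=+ offset of lastFinishedPulling
        slo := e2e - (lastFinishedPulling - firstStartedPulling)
        fmt.Printf("podStartSLOduration = %.9f\n", slo) // prints 2.518271733
    }
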
Feb 04 08:52:46 crc kubenswrapper[4644]: I0204 08:52:46.220255 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp"
Feb 04 08:52:46 crc kubenswrapper[4644]: I0204 08:52:46.222296 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-677949fd65-q44mg" event={"ID":"bf7f3412-56f2-4b59-bd63-86f748e1d27f","Type":"ContainerStarted","Data":"9db012b1f701c0dafafeacf07680a287577080d308a31e4c4a4f58f9e02a62ab"}
Feb 04 08:52:46 crc kubenswrapper[4644]: I0204 08:52:46.234995 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q22qq" event={"ID":"736f2cd3-420f-4c26-91ad-acd900c9fa01","Type":"ContainerStarted","Data":"266df67717d821c0b003156c6f255538d73bde94e021152dfd439e17a48d6718"}
Feb 04 08:52:46 crc kubenswrapper[4644]: I0204 08:52:46.235693 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-q22qq"
Feb 04 08:52:46 crc kubenswrapper[4644]: I0204 08:52:46.264614 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp" podStartSLOduration=2.5182717329999997 podStartE2EDuration="4.264589481s" podCreationTimestamp="2026-02-04 08:52:42 +0000 UTC" firstStartedPulling="2026-02-04 08:52:43.396320023 +0000 UTC m=+673.436377778" lastFinishedPulling="2026-02-04 08:52:45.142637771 +0000 UTC m=+675.182695526" observedRunningTime="2026-02-04 08:52:46.236692148 +0000 UTC m=+676.276749903" watchObservedRunningTime="2026-02-04 08:52:46.264589481 +0000 UTC m=+676.304647246"
Feb 04 08:52:46 crc kubenswrapper[4644]: I0204 08:52:46.274869 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-q22qq" podStartSLOduration=1.800212113 podStartE2EDuration="4.274853653s" podCreationTimestamp="2026-02-04 08:52:42 +0000 UTC" firstStartedPulling="2026-02-04 08:52:42.649815554 +0000 UTC m=+672.689873309" lastFinishedPulling="2026-02-04 08:52:45.124457094 +0000 UTC m=+675.164514849" observedRunningTime="2026-02-04 08:52:46.270122053 +0000 UTC m=+676.310179808" watchObservedRunningTime="2026-02-04 08:52:46.274853653 +0000 UTC m=+676.314911398"
Feb 04 08:52:47 crc kubenswrapper[4644]: I0204 08:52:47.242200 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n" event={"ID":"19ba6f84-da44-468a-bf88-2d5861308d59","Type":"ContainerStarted","Data":"75532fbf5d1f088c29dab92b37a0d47fcaa8bb995ed2c21c11e5a51f69f4c6b3"}
Feb 04 08:52:47 crc kubenswrapper[4644]: I0204 08:52:47.264501 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-mhn4n" podStartSLOduration=2.6562605870000002 podStartE2EDuration="5.264477682s" podCreationTimestamp="2026-02-04 08:52:42 +0000 UTC" firstStartedPulling="2026-02-04 08:52:43.627557849 +0000 UTC m=+673.667615604" lastFinishedPulling="2026-02-04 08:52:46.235774944 +0000 UTC m=+676.275832699" observedRunningTime="2026-02-04 08:52:47.256038322 +0000 UTC m=+677.296096097" watchObservedRunningTime="2026-02-04 08:52:47.264477682 +0000 UTC m=+677.304535437"
Feb 04 08:52:48 crc kubenswrapper[4644]: I0204 08:52:48.251528 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-677949fd65-q44mg" 
event={"ID":"bf7f3412-56f2-4b59-bd63-86f748e1d27f","Type":"ContainerStarted","Data":"59e8f2cdc8842e1b0875a736b61fd9426df0ce6cc6fc55c62288ddf33aa1c16f"} Feb 04 08:52:48 crc kubenswrapper[4644]: I0204 08:52:48.272186 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-677949fd65-q44mg" podStartSLOduration=1.992255483 podStartE2EDuration="6.272163636s" podCreationTimestamp="2026-02-04 08:52:42 +0000 UTC" firstStartedPulling="2026-02-04 08:52:43.09462343 +0000 UTC m=+673.134681185" lastFinishedPulling="2026-02-04 08:52:47.374531583 +0000 UTC m=+677.414589338" observedRunningTime="2026-02-04 08:52:48.268047853 +0000 UTC m=+678.308105618" watchObservedRunningTime="2026-02-04 08:52:48.272163636 +0000 UTC m=+678.312221392" Feb 04 08:52:52 crc kubenswrapper[4644]: I0204 08:52:52.645033 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-q22qq" Feb 04 08:52:53 crc kubenswrapper[4644]: I0204 08:52:53.023395 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:53 crc kubenswrapper[4644]: I0204 08:52:53.023462 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:53 crc kubenswrapper[4644]: I0204 08:52:53.032396 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:53 crc kubenswrapper[4644]: I0204 08:52:53.302143 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-74d86bd554-k2fg7" Feb 04 08:52:53 crc kubenswrapper[4644]: I0204 08:52:53.371074 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2mwnq"] Feb 04 08:53:03 crc kubenswrapper[4644]: I0204 08:53:03.225926 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-bd5678b45-bzsxp" Feb 04 08:53:17 crc kubenswrapper[4644]: I0204 08:53:17.006878 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd"] Feb 04 08:53:17 crc kubenswrapper[4644]: I0204 08:53:17.008527 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" Feb 04 08:53:17 crc kubenswrapper[4644]: I0204 08:53:17.021551 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 04 08:53:17 crc kubenswrapper[4644]: I0204 08:53:17.022442 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd"] Feb 04 08:53:17 crc kubenswrapper[4644]: I0204 08:53:17.131438 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-bundle\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd\" (UID: \"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" Feb 04 08:53:17 crc kubenswrapper[4644]: I0204 08:53:17.131534 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slq57\" (UniqueName: \"kubernetes.io/projected/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-kube-api-access-slq57\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd\" (UID: \"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" Feb 04 08:53:17 crc kubenswrapper[4644]: I0204 08:53:17.131581 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-util\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd\" (UID: \"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" Feb 04 08:53:17 crc kubenswrapper[4644]: I0204 08:53:17.232838 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-bundle\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd\" (UID: \"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" Feb 04 08:53:17 crc kubenswrapper[4644]: I0204 08:53:17.232897 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slq57\" (UniqueName: \"kubernetes.io/projected/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-kube-api-access-slq57\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd\" (UID: \"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" Feb 04 08:53:17 crc kubenswrapper[4644]: I0204 08:53:17.232929 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-util\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd\" (UID: \"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" Feb 04 08:53:17 crc kubenswrapper[4644]: I0204 08:53:17.233343 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-bundle\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd\" (UID: \"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" Feb 04 08:53:17 crc kubenswrapper[4644]: I0204 08:53:17.233356 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-util\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd\" (UID: \"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" Feb 04 08:53:17 crc kubenswrapper[4644]: I0204 08:53:17.251373 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slq57\" (UniqueName: \"kubernetes.io/projected/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-kube-api-access-slq57\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd\" (UID: \"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" Feb 04 08:53:17 crc kubenswrapper[4644]: I0204 08:53:17.324962 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" Feb 04 08:53:17 crc kubenswrapper[4644]: I0204 08:53:17.794605 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd"] Feb 04 08:53:17 crc kubenswrapper[4644]: W0204 08:53:17.807519 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4269a3f6_3fd6_4c8c_8dbd_08681fd28e39.slice/crio-0d50552d75b875e80183c6779998964d21f6dcdb5253b491c4d46cf41683ea0b WatchSource:0}: Error finding container 0d50552d75b875e80183c6779998964d21f6dcdb5253b491c4d46cf41683ea0b: Status 404 returned error can't find the container with id 0d50552d75b875e80183c6779998964d21f6dcdb5253b491c4d46cf41683ea0b Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.425023 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-2mwnq" podUID="59a2b9fd-ede9-4e85-8ad0-552716ecca00" containerName="console" containerID="cri-o://42bfef5cbf799a1f7ccb735a90ebff65e6fb59b266cb4f82d8a3f34b1e073f77" gracePeriod=15 Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.478940 4644 generic.go:334] "Generic (PLEG): container finished" podID="4269a3f6-3fd6-4c8c-8dbd-08681fd28e39" containerID="9143b75e9509810b93b72389439246274509d194ae3cd77f8e0df40d9c12aa5d" exitCode=0 Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.478983 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" event={"ID":"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39","Type":"ContainerDied","Data":"9143b75e9509810b93b72389439246274509d194ae3cd77f8e0df40d9c12aa5d"} Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.479010 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" event={"ID":"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39","Type":"ContainerStarted","Data":"0d50552d75b875e80183c6779998964d21f6dcdb5253b491c4d46cf41683ea0b"} Feb 04 08:53:18 crc 
kubenswrapper[4644]: I0204 08:53:18.741922 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2mwnq_59a2b9fd-ede9-4e85-8ad0-552716ecca00/console/0.log" Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.742220 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.761016 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-service-ca\") pod \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.761064 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-oauth-config\") pod \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.761091 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-trusted-ca-bundle\") pod \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.761110 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m7lx\" (UniqueName: \"kubernetes.io/projected/59a2b9fd-ede9-4e85-8ad0-552716ecca00-kube-api-access-9m7lx\") pod \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.761127 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-serving-cert\") pod \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.761159 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-config\") pod \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.762995 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-service-ca" (OuterVolumeSpecName: "service-ca") pod "59a2b9fd-ede9-4e85-8ad0-552716ecca00" (UID: "59a2b9fd-ede9-4e85-8ad0-552716ecca00"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.763454 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "59a2b9fd-ede9-4e85-8ad0-552716ecca00" (UID: "59a2b9fd-ede9-4e85-8ad0-552716ecca00"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.763585 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-config" (OuterVolumeSpecName: "console-config") pod "59a2b9fd-ede9-4e85-8ad0-552716ecca00" (UID: "59a2b9fd-ede9-4e85-8ad0-552716ecca00"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.777036 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "59a2b9fd-ede9-4e85-8ad0-552716ecca00" (UID: "59a2b9fd-ede9-4e85-8ad0-552716ecca00"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.777359 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "59a2b9fd-ede9-4e85-8ad0-552716ecca00" (UID: "59a2b9fd-ede9-4e85-8ad0-552716ecca00"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.777800 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a2b9fd-ede9-4e85-8ad0-552716ecca00-kube-api-access-9m7lx" (OuterVolumeSpecName: "kube-api-access-9m7lx") pod "59a2b9fd-ede9-4e85-8ad0-552716ecca00" (UID: "59a2b9fd-ede9-4e85-8ad0-552716ecca00"). InnerVolumeSpecName "kube-api-access-9m7lx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.862850 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-oauth-serving-cert\") pod \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\" (UID: \"59a2b9fd-ede9-4e85-8ad0-552716ecca00\") " Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.863081 4644 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.863093 4644 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.863103 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m7lx\" (UniqueName: \"kubernetes.io/projected/59a2b9fd-ede9-4e85-8ad0-552716ecca00-kube-api-access-9m7lx\") on node \"crc\" DevicePath \"\"" Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.863112 4644 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.863120 4644 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-console-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.863129 4644 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-service-ca\") on node \"crc\" DevicePath \"\"" Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.863299 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "59a2b9fd-ede9-4e85-8ad0-552716ecca00" (UID: "59a2b9fd-ede9-4e85-8ad0-552716ecca00"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:53:18 crc kubenswrapper[4644]: I0204 08:53:18.963742 4644 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59a2b9fd-ede9-4e85-8ad0-552716ecca00-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 08:53:19 crc kubenswrapper[4644]: I0204 08:53:19.487945 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2mwnq_59a2b9fd-ede9-4e85-8ad0-552716ecca00/console/0.log" Feb 04 08:53:19 crc kubenswrapper[4644]: I0204 08:53:19.488254 4644 generic.go:334] "Generic (PLEG): container finished" podID="59a2b9fd-ede9-4e85-8ad0-552716ecca00" containerID="42bfef5cbf799a1f7ccb735a90ebff65e6fb59b266cb4f82d8a3f34b1e073f77" exitCode=2 Feb 04 08:53:19 crc kubenswrapper[4644]: I0204 08:53:19.488283 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2mwnq" event={"ID":"59a2b9fd-ede9-4e85-8ad0-552716ecca00","Type":"ContainerDied","Data":"42bfef5cbf799a1f7ccb735a90ebff65e6fb59b266cb4f82d8a3f34b1e073f77"} Feb 04 08:53:19 crc kubenswrapper[4644]: I0204 08:53:19.488317 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2mwnq" event={"ID":"59a2b9fd-ede9-4e85-8ad0-552716ecca00","Type":"ContainerDied","Data":"b5bfe16215a339f45d9ddebe44bdd09f9d773e0a185297cf003f87147109bf13"} Feb 04 08:53:19 crc kubenswrapper[4644]: I0204 08:53:19.488341 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2mwnq" Feb 04 08:53:19 crc kubenswrapper[4644]: I0204 08:53:19.488352 4644 scope.go:117] "RemoveContainer" containerID="42bfef5cbf799a1f7ccb735a90ebff65e6fb59b266cb4f82d8a3f34b1e073f77" Feb 04 08:53:19 crc kubenswrapper[4644]: I0204 08:53:19.520517 4644 scope.go:117] "RemoveContainer" containerID="42bfef5cbf799a1f7ccb735a90ebff65e6fb59b266cb4f82d8a3f34b1e073f77" Feb 04 08:53:19 crc kubenswrapper[4644]: I0204 08:53:19.520699 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2mwnq"] Feb 04 08:53:19 crc kubenswrapper[4644]: E0204 08:53:19.521091 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42bfef5cbf799a1f7ccb735a90ebff65e6fb59b266cb4f82d8a3f34b1e073f77\": container with ID starting with 42bfef5cbf799a1f7ccb735a90ebff65e6fb59b266cb4f82d8a3f34b1e073f77 not found: ID does not exist" containerID="42bfef5cbf799a1f7ccb735a90ebff65e6fb59b266cb4f82d8a3f34b1e073f77" Feb 04 08:53:19 crc kubenswrapper[4644]: I0204 08:53:19.521159 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42bfef5cbf799a1f7ccb735a90ebff65e6fb59b266cb4f82d8a3f34b1e073f77"} err="failed to get container status \"42bfef5cbf799a1f7ccb735a90ebff65e6fb59b266cb4f82d8a3f34b1e073f77\": rpc error: code = NotFound desc = could not find container \"42bfef5cbf799a1f7ccb735a90ebff65e6fb59b266cb4f82d8a3f34b1e073f77\": container with ID starting with 42bfef5cbf799a1f7ccb735a90ebff65e6fb59b266cb4f82d8a3f34b1e073f77 not found: ID does not exist" Feb 04 08:53:19 crc kubenswrapper[4644]: I0204 08:53:19.524072 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-2mwnq"] Feb 04 08:53:20 crc kubenswrapper[4644]: I0204 08:53:20.496445 4644 generic.go:334] "Generic (PLEG): container finished" podID="4269a3f6-3fd6-4c8c-8dbd-08681fd28e39" 
containerID="51ef170484e2f07c9dae88f4e4efb8f9d4292746888e274e5a6b69b9994ef4e5" exitCode=0 Feb 04 08:53:20 crc kubenswrapper[4644]: I0204 08:53:20.496503 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" event={"ID":"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39","Type":"ContainerDied","Data":"51ef170484e2f07c9dae88f4e4efb8f9d4292746888e274e5a6b69b9994ef4e5"} Feb 04 08:53:20 crc kubenswrapper[4644]: I0204 08:53:20.687482 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a2b9fd-ede9-4e85-8ad0-552716ecca00" path="/var/lib/kubelet/pods/59a2b9fd-ede9-4e85-8ad0-552716ecca00/volumes" Feb 04 08:53:21 crc kubenswrapper[4644]: I0204 08:53:21.510881 4644 generic.go:334] "Generic (PLEG): container finished" podID="4269a3f6-3fd6-4c8c-8dbd-08681fd28e39" containerID="f659556b1ac554a5aae8b43c4863c0e9834d8dc32b13262276cb09109f2e5d2e" exitCode=0 Feb 04 08:53:21 crc kubenswrapper[4644]: I0204 08:53:21.511029 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" event={"ID":"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39","Type":"ContainerDied","Data":"f659556b1ac554a5aae8b43c4863c0e9834d8dc32b13262276cb09109f2e5d2e"} Feb 04 08:53:22 crc kubenswrapper[4644]: I0204 08:53:22.745766 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" Feb 04 08:53:22 crc kubenswrapper[4644]: I0204 08:53:22.911720 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slq57\" (UniqueName: \"kubernetes.io/projected/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-kube-api-access-slq57\") pod \"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39\" (UID: \"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39\") " Feb 04 08:53:22 crc kubenswrapper[4644]: I0204 08:53:22.912074 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-util\") pod \"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39\" (UID: \"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39\") " Feb 04 08:53:22 crc kubenswrapper[4644]: I0204 08:53:22.912123 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-bundle\") pod \"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39\" (UID: \"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39\") " Feb 04 08:53:22 crc kubenswrapper[4644]: I0204 08:53:22.916255 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-bundle" (OuterVolumeSpecName: "bundle") pod "4269a3f6-3fd6-4c8c-8dbd-08681fd28e39" (UID: "4269a3f6-3fd6-4c8c-8dbd-08681fd28e39"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:53:22 crc kubenswrapper[4644]: I0204 08:53:22.917162 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-kube-api-access-slq57" (OuterVolumeSpecName: "kube-api-access-slq57") pod "4269a3f6-3fd6-4c8c-8dbd-08681fd28e39" (UID: "4269a3f6-3fd6-4c8c-8dbd-08681fd28e39"). InnerVolumeSpecName "kube-api-access-slq57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:53:22 crc kubenswrapper[4644]: I0204 08:53:22.926841 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-util" (OuterVolumeSpecName: "util") pod "4269a3f6-3fd6-4c8c-8dbd-08681fd28e39" (UID: "4269a3f6-3fd6-4c8c-8dbd-08681fd28e39"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:53:23 crc kubenswrapper[4644]: I0204 08:53:23.013162 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slq57\" (UniqueName: \"kubernetes.io/projected/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-kube-api-access-slq57\") on node \"crc\" DevicePath \"\"" Feb 04 08:53:23 crc kubenswrapper[4644]: I0204 08:53:23.013210 4644 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-util\") on node \"crc\" DevicePath \"\"" Feb 04 08:53:23 crc kubenswrapper[4644]: I0204 08:53:23.013221 4644 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4269a3f6-3fd6-4c8c-8dbd-08681fd28e39-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 08:53:23 crc kubenswrapper[4644]: I0204 08:53:23.526313 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd" event={"ID":"4269a3f6-3fd6-4c8c-8dbd-08681fd28e39","Type":"ContainerDied","Data":"0d50552d75b875e80183c6779998964d21f6dcdb5253b491c4d46cf41683ea0b"} Feb 04 08:53:23 crc kubenswrapper[4644]: I0204 08:53:23.526374 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d50552d75b875e80183c6779998964d21f6dcdb5253b491c4d46cf41683ea0b" Feb 04 08:53:23 crc kubenswrapper[4644]: I0204 08:53:23.526404 4644 util.go:48] "No ready sandbox for pod can be found. 
Feb 04 08:53:23 crc kubenswrapper[4644]: I0204 08:53:23.526404 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd"
Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.419314 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb"]
Feb 04 08:53:32 crc kubenswrapper[4644]: E0204 08:53:32.419995 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a2b9fd-ede9-4e85-8ad0-552716ecca00" containerName="console"
Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.420006 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a2b9fd-ede9-4e85-8ad0-552716ecca00" containerName="console"
Feb 04 08:53:32 crc kubenswrapper[4644]: E0204 08:53:32.420020 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4269a3f6-3fd6-4c8c-8dbd-08681fd28e39" containerName="util"
Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.420027 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="4269a3f6-3fd6-4c8c-8dbd-08681fd28e39" containerName="util"
Feb 04 08:53:32 crc kubenswrapper[4644]: E0204 08:53:32.420044 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4269a3f6-3fd6-4c8c-8dbd-08681fd28e39" containerName="extract"
Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.420050 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="4269a3f6-3fd6-4c8c-8dbd-08681fd28e39" containerName="extract"
Feb 04 08:53:32 crc kubenswrapper[4644]: E0204 08:53:32.420061 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4269a3f6-3fd6-4c8c-8dbd-08681fd28e39" containerName="pull"
Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.420066 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="4269a3f6-3fd6-4c8c-8dbd-08681fd28e39" containerName="pull"
Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.420151 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="4269a3f6-3fd6-4c8c-8dbd-08681fd28e39" containerName="extract"
Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.420165 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="59a2b9fd-ede9-4e85-8ad0-552716ecca00" containerName="console"
Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.420519 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.424727 4644 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-d85gm" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.428021 4644 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.428090 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.428119 4644 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.428507 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.444754 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88c7f\" (UniqueName: \"kubernetes.io/projected/880260a9-a2e8-463c-97ba-3b936f884d9d-kube-api-access-88c7f\") pod \"metallb-operator-controller-manager-668579b8df-dc2hb\" (UID: \"880260a9-a2e8-463c-97ba-3b936f884d9d\") " pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.444828 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/880260a9-a2e8-463c-97ba-3b936f884d9d-webhook-cert\") pod \"metallb-operator-controller-manager-668579b8df-dc2hb\" (UID: \"880260a9-a2e8-463c-97ba-3b936f884d9d\") " pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.444877 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/880260a9-a2e8-463c-97ba-3b936f884d9d-apiservice-cert\") pod \"metallb-operator-controller-manager-668579b8df-dc2hb\" (UID: \"880260a9-a2e8-463c-97ba-3b936f884d9d\") " pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.449958 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb"] Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.546298 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/880260a9-a2e8-463c-97ba-3b936f884d9d-apiservice-cert\") pod \"metallb-operator-controller-manager-668579b8df-dc2hb\" (UID: \"880260a9-a2e8-463c-97ba-3b936f884d9d\") " pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.546361 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88c7f\" (UniqueName: \"kubernetes.io/projected/880260a9-a2e8-463c-97ba-3b936f884d9d-kube-api-access-88c7f\") pod \"metallb-operator-controller-manager-668579b8df-dc2hb\" (UID: \"880260a9-a2e8-463c-97ba-3b936f884d9d\") " pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.546416 
4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/880260a9-a2e8-463c-97ba-3b936f884d9d-webhook-cert\") pod \"metallb-operator-controller-manager-668579b8df-dc2hb\" (UID: \"880260a9-a2e8-463c-97ba-3b936f884d9d\") " pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.552838 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/880260a9-a2e8-463c-97ba-3b936f884d9d-webhook-cert\") pod \"metallb-operator-controller-manager-668579b8df-dc2hb\" (UID: \"880260a9-a2e8-463c-97ba-3b936f884d9d\") " pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.564137 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/880260a9-a2e8-463c-97ba-3b936f884d9d-apiservice-cert\") pod \"metallb-operator-controller-manager-668579b8df-dc2hb\" (UID: \"880260a9-a2e8-463c-97ba-3b936f884d9d\") " pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.565427 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88c7f\" (UniqueName: \"kubernetes.io/projected/880260a9-a2e8-463c-97ba-3b936f884d9d-kube-api-access-88c7f\") pod \"metallb-operator-controller-manager-668579b8df-dc2hb\" (UID: \"880260a9-a2e8-463c-97ba-3b936f884d9d\") " pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.712802 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p"] Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.713674 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.716549 4644 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-bmq2f" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.728850 4644 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.729580 4644 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.737470 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.750615 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e5e99bd5-408c-4369-bd40-b31bb61ffc43-apiservice-cert\") pod \"metallb-operator-webhook-server-b86757d9b-m6f8p\" (UID: \"e5e99bd5-408c-4369-bd40-b31bb61ffc43\") " pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.750667 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e5e99bd5-408c-4369-bd40-b31bb61ffc43-webhook-cert\") pod \"metallb-operator-webhook-server-b86757d9b-m6f8p\" (UID: \"e5e99bd5-408c-4369-bd40-b31bb61ffc43\") " pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.750830 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v4gk\" (UniqueName: \"kubernetes.io/projected/e5e99bd5-408c-4369-bd40-b31bb61ffc43-kube-api-access-9v4gk\") pod \"metallb-operator-webhook-server-b86757d9b-m6f8p\" (UID: \"e5e99bd5-408c-4369-bd40-b31bb61ffc43\") " pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.757942 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p"] Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.852091 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v4gk\" (UniqueName: \"kubernetes.io/projected/e5e99bd5-408c-4369-bd40-b31bb61ffc43-kube-api-access-9v4gk\") pod \"metallb-operator-webhook-server-b86757d9b-m6f8p\" (UID: \"e5e99bd5-408c-4369-bd40-b31bb61ffc43\") " pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.852165 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e5e99bd5-408c-4369-bd40-b31bb61ffc43-apiservice-cert\") pod \"metallb-operator-webhook-server-b86757d9b-m6f8p\" (UID: \"e5e99bd5-408c-4369-bd40-b31bb61ffc43\") " pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.852195 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e5e99bd5-408c-4369-bd40-b31bb61ffc43-webhook-cert\") pod \"metallb-operator-webhook-server-b86757d9b-m6f8p\" (UID: \"e5e99bd5-408c-4369-bd40-b31bb61ffc43\") " pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.856756 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e5e99bd5-408c-4369-bd40-b31bb61ffc43-apiservice-cert\") pod \"metallb-operator-webhook-server-b86757d9b-m6f8p\" (UID: \"e5e99bd5-408c-4369-bd40-b31bb61ffc43\") " pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.856756 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/e5e99bd5-408c-4369-bd40-b31bb61ffc43-webhook-cert\") pod \"metallb-operator-webhook-server-b86757d9b-m6f8p\" (UID: \"e5e99bd5-408c-4369-bd40-b31bb61ffc43\") " pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" Feb 04 08:53:32 crc kubenswrapper[4644]: I0204 08:53:32.876086 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v4gk\" (UniqueName: \"kubernetes.io/projected/e5e99bd5-408c-4369-bd40-b31bb61ffc43-kube-api-access-9v4gk\") pod \"metallb-operator-webhook-server-b86757d9b-m6f8p\" (UID: \"e5e99bd5-408c-4369-bd40-b31bb61ffc43\") " pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" Feb 04 08:53:33 crc kubenswrapper[4644]: I0204 08:53:33.043912 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" Feb 04 08:53:33 crc kubenswrapper[4644]: I0204 08:53:33.105575 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb"] Feb 04 08:53:33 crc kubenswrapper[4644]: W0204 08:53:33.113552 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod880260a9_a2e8_463c_97ba_3b936f884d9d.slice/crio-19f452d8564a7290a04821b997501c21d1295a1161928650f8c5cdc437b0c6b9 WatchSource:0}: Error finding container 19f452d8564a7290a04821b997501c21d1295a1161928650f8c5cdc437b0c6b9: Status 404 returned error can't find the container with id 19f452d8564a7290a04821b997501c21d1295a1161928650f8c5cdc437b0c6b9 Feb 04 08:53:33 crc kubenswrapper[4644]: I0204 08:53:33.325745 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p"] Feb 04 08:53:33 crc kubenswrapper[4644]: W0204 08:53:33.334236 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e99bd5_408c_4369_bd40_b31bb61ffc43.slice/crio-9d788cbfafa74972835e9e5a128ad10b19ba5ec2b8795e300c2d3d2fe719c5df WatchSource:0}: Error finding container 9d788cbfafa74972835e9e5a128ad10b19ba5ec2b8795e300c2d3d2fe719c5df: Status 404 returned error can't find the container with id 9d788cbfafa74972835e9e5a128ad10b19ba5ec2b8795e300c2d3d2fe719c5df Feb 04 08:53:33 crc kubenswrapper[4644]: I0204 08:53:33.580489 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" event={"ID":"880260a9-a2e8-463c-97ba-3b936f884d9d","Type":"ContainerStarted","Data":"19f452d8564a7290a04821b997501c21d1295a1161928650f8c5cdc437b0c6b9"} Feb 04 08:53:33 crc kubenswrapper[4644]: I0204 08:53:33.581542 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" event={"ID":"e5e99bd5-408c-4369-bd40-b31bb61ffc43","Type":"ContainerStarted","Data":"9d788cbfafa74972835e9e5a128ad10b19ba5ec2b8795e300c2d3d2fe719c5df"} Feb 04 08:53:35 crc kubenswrapper[4644]: I0204 08:53:35.554938 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 08:53:35 crc kubenswrapper[4644]: I0204 08:53:35.555227 4644 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 08:53:38 crc kubenswrapper[4644]: I0204 08:53:38.610306 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" event={"ID":"e5e99bd5-408c-4369-bd40-b31bb61ffc43","Type":"ContainerStarted","Data":"42a151820155e0e4f0b5414d55834c311bc0f87efae49ad7fa5f6a660d950400"} Feb 04 08:53:38 crc kubenswrapper[4644]: I0204 08:53:38.610765 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" Feb 04 08:53:38 crc kubenswrapper[4644]: I0204 08:53:38.614466 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" event={"ID":"880260a9-a2e8-463c-97ba-3b936f884d9d","Type":"ContainerStarted","Data":"722d19b54f7197647196dac0558002f2b8957e77fa1aaec30bd766529ef72c80"} Feb 04 08:53:38 crc kubenswrapper[4644]: I0204 08:53:38.614635 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" Feb 04 08:53:38 crc kubenswrapper[4644]: I0204 08:53:38.629725 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" podStartSLOduration=1.585978899 podStartE2EDuration="6.629704974s" podCreationTimestamp="2026-02-04 08:53:32 +0000 UTC" firstStartedPulling="2026-02-04 08:53:33.337932744 +0000 UTC m=+723.377990499" lastFinishedPulling="2026-02-04 08:53:38.381658819 +0000 UTC m=+728.421716574" observedRunningTime="2026-02-04 08:53:38.625242802 +0000 UTC m=+728.665300577" watchObservedRunningTime="2026-02-04 08:53:38.629704974 +0000 UTC m=+728.669762729" Feb 04 08:53:38 crc kubenswrapper[4644]: I0204 08:53:38.649801 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" podStartSLOduration=1.39887739 podStartE2EDuration="6.649780333s" podCreationTimestamp="2026-02-04 08:53:32 +0000 UTC" firstStartedPulling="2026-02-04 08:53:33.117456132 +0000 UTC m=+723.157513877" lastFinishedPulling="2026-02-04 08:53:38.368359065 +0000 UTC m=+728.408416820" observedRunningTime="2026-02-04 08:53:38.648776565 +0000 UTC m=+728.688834320" watchObservedRunningTime="2026-02-04 08:53:38.649780333 +0000 UTC m=+728.689838088" Feb 04 08:53:53 crc kubenswrapper[4644]: I0204 08:53:53.057930 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-b86757d9b-m6f8p" Feb 04 08:54:05 crc kubenswrapper[4644]: I0204 08:54:05.555588 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 08:54:05 crc kubenswrapper[4644]: I0204 08:54:05.556056 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 08:54:10 crc kubenswrapper[4644]: I0204 08:54:10.141364 4644 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 04 08:54:12 crc kubenswrapper[4644]: I0204 08:54:12.741013 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-668579b8df-dc2hb" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.514062 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-6bp8j"] Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.516313 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.518497 4644 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vm7q6" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.518895 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.519899 4644 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.537349 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg"] Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.538149 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.540264 4644 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.570395 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg"] Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.578177 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/108d8162-12e1-4dfa-ab06-a416b6880150-metrics-certs\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.578233 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/108d8162-12e1-4dfa-ab06-a416b6880150-reloader\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.578265 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/108d8162-12e1-4dfa-ab06-a416b6880150-frr-sockets\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.578424 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/108d8162-12e1-4dfa-ab06-a416b6880150-frr-conf\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " 
pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.578479 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rwt6\" (UniqueName: \"kubernetes.io/projected/108d8162-12e1-4dfa-ab06-a416b6880150-kube-api-access-9rwt6\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.578506 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/108d8162-12e1-4dfa-ab06-a416b6880150-metrics\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.578614 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd959e6b-00cf-4818-8b5a-0ad09c060e5e-cert\") pod \"frr-k8s-webhook-server-97dfd4f9f-jcnsg\" (UID: \"fd959e6b-00cf-4818-8b5a-0ad09c060e5e\") " pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.578679 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbxzd\" (UniqueName: \"kubernetes.io/projected/fd959e6b-00cf-4818-8b5a-0ad09c060e5e-kube-api-access-xbxzd\") pod \"frr-k8s-webhook-server-97dfd4f9f-jcnsg\" (UID: \"fd959e6b-00cf-4818-8b5a-0ad09c060e5e\") " pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.578762 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/108d8162-12e1-4dfa-ab06-a416b6880150-frr-startup\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.662691 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-twwks"] Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.663513 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-twwks" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.670956 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.671013 4644 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-8ht7m" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.671019 4644 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.671088 4644 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.679668 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/108d8162-12e1-4dfa-ab06-a416b6880150-frr-conf\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.679720 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-metrics-certs\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.679748 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rwt6\" (UniqueName: \"kubernetes.io/projected/108d8162-12e1-4dfa-ab06-a416b6880150-kube-api-access-9rwt6\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.679775 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/108d8162-12e1-4dfa-ab06-a416b6880150-metrics\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.679886 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd959e6b-00cf-4818-8b5a-0ad09c060e5e-cert\") pod \"frr-k8s-webhook-server-97dfd4f9f-jcnsg\" (UID: \"fd959e6b-00cf-4818-8b5a-0ad09c060e5e\") " pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg" Feb 04 08:54:13 crc kubenswrapper[4644]: E0204 08:54:13.680052 4644 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 04 08:54:13 crc kubenswrapper[4644]: E0204 08:54:13.680128 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd959e6b-00cf-4818-8b5a-0ad09c060e5e-cert podName:fd959e6b-00cf-4818-8b5a-0ad09c060e5e nodeName:}" failed. No retries permitted until 2026-02-04 08:54:14.180104542 +0000 UTC m=+764.220162297 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fd959e6b-00cf-4818-8b5a-0ad09c060e5e-cert") pod "frr-k8s-webhook-server-97dfd4f9f-jcnsg" (UID: "fd959e6b-00cf-4818-8b5a-0ad09c060e5e") : secret "frr-k8s-webhook-server-cert" not found Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.680581 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/108d8162-12e1-4dfa-ab06-a416b6880150-metrics\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.680614 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbxzd\" (UniqueName: \"kubernetes.io/projected/fd959e6b-00cf-4818-8b5a-0ad09c060e5e-kube-api-access-xbxzd\") pod \"frr-k8s-webhook-server-97dfd4f9f-jcnsg\" (UID: \"fd959e6b-00cf-4818-8b5a-0ad09c060e5e\") " pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.680795 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/108d8162-12e1-4dfa-ab06-a416b6880150-frr-conf\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.680883 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29lcc\" (UniqueName: \"kubernetes.io/projected/4496f888-8e49-4a88-b753-7f2d55dc317a-kube-api-access-29lcc\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.680989 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/108d8162-12e1-4dfa-ab06-a416b6880150-frr-startup\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.681103 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-memberlist\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.681535 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/108d8162-12e1-4dfa-ab06-a416b6880150-metrics-certs\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.681684 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/108d8162-12e1-4dfa-ab06-a416b6880150-frr-startup\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.681871 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/108d8162-12e1-4dfa-ab06-a416b6880150-reloader\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " 
pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.682148 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/108d8162-12e1-4dfa-ab06-a416b6880150-reloader\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.682209 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/108d8162-12e1-4dfa-ab06-a416b6880150-frr-sockets\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.682590 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/108d8162-12e1-4dfa-ab06-a416b6880150-frr-sockets\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.682719 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4496f888-8e49-4a88-b753-7f2d55dc317a-metallb-excludel2\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.690297 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/108d8162-12e1-4dfa-ab06-a416b6880150-metrics-certs\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.690981 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-9c48fdfd-z7zmw"] Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.691874 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-9c48fdfd-z7zmw" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.706262 4644 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.720769 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rwt6\" (UniqueName: \"kubernetes.io/projected/108d8162-12e1-4dfa-ab06-a416b6880150-kube-api-access-9rwt6\") pod \"frr-k8s-6bp8j\" (UID: \"108d8162-12e1-4dfa-ab06-a416b6880150\") " pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.739074 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbxzd\" (UniqueName: \"kubernetes.io/projected/fd959e6b-00cf-4818-8b5a-0ad09c060e5e-kube-api-access-xbxzd\") pod \"frr-k8s-webhook-server-97dfd4f9f-jcnsg\" (UID: \"fd959e6b-00cf-4818-8b5a-0ad09c060e5e\") " pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.773224 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-9c48fdfd-z7zmw"] Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.799885 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4496f888-8e49-4a88-b753-7f2d55dc317a-metallb-excludel2\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.799931 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-metrics-certs\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.799977 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29lcc\" (UniqueName: \"kubernetes.io/projected/4496f888-8e49-4a88-b753-7f2d55dc317a-kube-api-access-29lcc\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.800010 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-memberlist\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:13 crc kubenswrapper[4644]: E0204 08:54:13.800100 4644 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 04 08:54:13 crc kubenswrapper[4644]: E0204 08:54:13.800139 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-memberlist podName:4496f888-8e49-4a88-b753-7f2d55dc317a nodeName:}" failed. No retries permitted until 2026-02-04 08:54:14.300126045 +0000 UTC m=+764.340183800 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-memberlist") pod "speaker-twwks" (UID: "4496f888-8e49-4a88-b753-7f2d55dc317a") : secret "metallb-memberlist" not found Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.801016 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4496f888-8e49-4a88-b753-7f2d55dc317a-metallb-excludel2\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:13 crc kubenswrapper[4644]: E0204 08:54:13.801076 4644 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 04 08:54:13 crc kubenswrapper[4644]: E0204 08:54:13.801100 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-metrics-certs podName:4496f888-8e49-4a88-b753-7f2d55dc317a nodeName:}" failed. No retries permitted until 2026-02-04 08:54:14.301092462 +0000 UTC m=+764.341150217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-metrics-certs") pod "speaker-twwks" (UID: "4496f888-8e49-4a88-b753-7f2d55dc317a") : secret "speaker-certs-secret" not found Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.837157 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.837942 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29lcc\" (UniqueName: \"kubernetes.io/projected/4496f888-8e49-4a88-b753-7f2d55dc317a-kube-api-access-29lcc\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.901076 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef11c1e1-54cf-4428-9a73-9a8eb183dde6-metrics-certs\") pod \"controller-9c48fdfd-z7zmw\" (UID: \"ef11c1e1-54cf-4428-9a73-9a8eb183dde6\") " pod="metallb-system/controller-9c48fdfd-z7zmw" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.901186 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jh7t\" (UniqueName: \"kubernetes.io/projected/ef11c1e1-54cf-4428-9a73-9a8eb183dde6-kube-api-access-6jh7t\") pod \"controller-9c48fdfd-z7zmw\" (UID: \"ef11c1e1-54cf-4428-9a73-9a8eb183dde6\") " pod="metallb-system/controller-9c48fdfd-z7zmw" Feb 04 08:54:13 crc kubenswrapper[4644]: I0204 08:54:13.901236 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef11c1e1-54cf-4428-9a73-9a8eb183dde6-cert\") pod \"controller-9c48fdfd-z7zmw\" (UID: \"ef11c1e1-54cf-4428-9a73-9a8eb183dde6\") " pod="metallb-system/controller-9c48fdfd-z7zmw" Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.002999 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef11c1e1-54cf-4428-9a73-9a8eb183dde6-cert\") pod \"controller-9c48fdfd-z7zmw\" (UID: \"ef11c1e1-54cf-4428-9a73-9a8eb183dde6\") " pod="metallb-system/controller-9c48fdfd-z7zmw" Feb 04 08:54:14 
crc kubenswrapper[4644]: I0204 08:54:14.003230 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef11c1e1-54cf-4428-9a73-9a8eb183dde6-metrics-certs\") pod \"controller-9c48fdfd-z7zmw\" (UID: \"ef11c1e1-54cf-4428-9a73-9a8eb183dde6\") " pod="metallb-system/controller-9c48fdfd-z7zmw" Feb 04 08:54:14 crc kubenswrapper[4644]: E0204 08:54:14.003316 4644 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.003576 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jh7t\" (UniqueName: \"kubernetes.io/projected/ef11c1e1-54cf-4428-9a73-9a8eb183dde6-kube-api-access-6jh7t\") pod \"controller-9c48fdfd-z7zmw\" (UID: \"ef11c1e1-54cf-4428-9a73-9a8eb183dde6\") " pod="metallb-system/controller-9c48fdfd-z7zmw" Feb 04 08:54:14 crc kubenswrapper[4644]: E0204 08:54:14.003740 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef11c1e1-54cf-4428-9a73-9a8eb183dde6-metrics-certs podName:ef11c1e1-54cf-4428-9a73-9a8eb183dde6 nodeName:}" failed. No retries permitted until 2026-02-04 08:54:14.503717095 +0000 UTC m=+764.543774900 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef11c1e1-54cf-4428-9a73-9a8eb183dde6-metrics-certs") pod "controller-9c48fdfd-z7zmw" (UID: "ef11c1e1-54cf-4428-9a73-9a8eb183dde6") : secret "controller-certs-secret" not found Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.008015 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef11c1e1-54cf-4428-9a73-9a8eb183dde6-cert\") pod \"controller-9c48fdfd-z7zmw\" (UID: \"ef11c1e1-54cf-4428-9a73-9a8eb183dde6\") " pod="metallb-system/controller-9c48fdfd-z7zmw" Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.021344 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jh7t\" (UniqueName: \"kubernetes.io/projected/ef11c1e1-54cf-4428-9a73-9a8eb183dde6-kube-api-access-6jh7t\") pod \"controller-9c48fdfd-z7zmw\" (UID: \"ef11c1e1-54cf-4428-9a73-9a8eb183dde6\") " pod="metallb-system/controller-9c48fdfd-z7zmw" Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.207473 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd959e6b-00cf-4818-8b5a-0ad09c060e5e-cert\") pod \"frr-k8s-webhook-server-97dfd4f9f-jcnsg\" (UID: \"fd959e6b-00cf-4818-8b5a-0ad09c060e5e\") " pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg" Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.210354 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd959e6b-00cf-4818-8b5a-0ad09c060e5e-cert\") pod \"frr-k8s-webhook-server-97dfd4f9f-jcnsg\" (UID: \"fd959e6b-00cf-4818-8b5a-0ad09c060e5e\") " pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg" Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.308181 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-memberlist\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.308274 4644 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-metrics-certs\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:14 crc kubenswrapper[4644]: E0204 08:54:14.308606 4644 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 04 08:54:14 crc kubenswrapper[4644]: E0204 08:54:14.308740 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-memberlist podName:4496f888-8e49-4a88-b753-7f2d55dc317a nodeName:}" failed. No retries permitted until 2026-02-04 08:54:15.308721458 +0000 UTC m=+765.348779223 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-memberlist") pod "speaker-twwks" (UID: "4496f888-8e49-4a88-b753-7f2d55dc317a") : secret "metallb-memberlist" not found Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.310889 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-metrics-certs\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.452827 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg" Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.510385 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef11c1e1-54cf-4428-9a73-9a8eb183dde6-metrics-certs\") pod \"controller-9c48fdfd-z7zmw\" (UID: \"ef11c1e1-54cf-4428-9a73-9a8eb183dde6\") " pod="metallb-system/controller-9c48fdfd-z7zmw" Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.519112 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef11c1e1-54cf-4428-9a73-9a8eb183dde6-metrics-certs\") pod \"controller-9c48fdfd-z7zmw\" (UID: \"ef11c1e1-54cf-4428-9a73-9a8eb183dde6\") " pod="metallb-system/controller-9c48fdfd-z7zmw" Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.676354 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg"] Feb 04 08:54:14 crc kubenswrapper[4644]: W0204 08:54:14.688356 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd959e6b_00cf_4818_8b5a_0ad09c060e5e.slice/crio-3369bc5702373354156d15d3c33c9591ab0e0bd11cf19e22d7e65e78dbb43e2b WatchSource:0}: Error finding container 3369bc5702373354156d15d3c33c9591ab0e0bd11cf19e22d7e65e78dbb43e2b: Status 404 returned error can't find the container with id 3369bc5702373354156d15d3c33c9591ab0e0bd11cf19e22d7e65e78dbb43e2b Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.718340 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-9c48fdfd-z7zmw" Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.829064 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6bp8j" event={"ID":"108d8162-12e1-4dfa-ab06-a416b6880150","Type":"ContainerStarted","Data":"90c8bedf0e17ae3d4cfe696af80b62c95306bf730cdd11150cd315955b4c4b13"} Feb 04 08:54:14 crc kubenswrapper[4644]: I0204 08:54:14.834233 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg" event={"ID":"fd959e6b-00cf-4818-8b5a-0ad09c060e5e","Type":"ContainerStarted","Data":"3369bc5702373354156d15d3c33c9591ab0e0bd11cf19e22d7e65e78dbb43e2b"} Feb 04 08:54:15 crc kubenswrapper[4644]: I0204 08:54:15.159410 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-9c48fdfd-z7zmw"] Feb 04 08:54:15 crc kubenswrapper[4644]: W0204 08:54:15.179930 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef11c1e1_54cf_4428_9a73_9a8eb183dde6.slice/crio-76b8c8772f110832dcfa9abc501d1c2eb2ac6072ed57a5bc68e112d9411d8035 WatchSource:0}: Error finding container 76b8c8772f110832dcfa9abc501d1c2eb2ac6072ed57a5bc68e112d9411d8035: Status 404 returned error can't find the container with id 76b8c8772f110832dcfa9abc501d1c2eb2ac6072ed57a5bc68e112d9411d8035 Feb 04 08:54:15 crc kubenswrapper[4644]: I0204 08:54:15.324850 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-memberlist\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:15 crc kubenswrapper[4644]: E0204 08:54:15.325085 4644 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 04 08:54:15 crc kubenswrapper[4644]: E0204 08:54:15.325161 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-memberlist podName:4496f888-8e49-4a88-b753-7f2d55dc317a nodeName:}" failed. No retries permitted until 2026-02-04 08:54:17.325141571 +0000 UTC m=+767.365199336 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-memberlist") pod "speaker-twwks" (UID: "4496f888-8e49-4a88-b753-7f2d55dc317a") : secret "metallb-memberlist" not found Feb 04 08:54:15 crc kubenswrapper[4644]: I0204 08:54:15.843248 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-9c48fdfd-z7zmw" event={"ID":"ef11c1e1-54cf-4428-9a73-9a8eb183dde6","Type":"ContainerStarted","Data":"36111e43af048628953a09ccdb1ca4bbb5bdbe20581cf99ced66f8a65d7d3e17"} Feb 04 08:54:15 crc kubenswrapper[4644]: I0204 08:54:15.843292 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-9c48fdfd-z7zmw" event={"ID":"ef11c1e1-54cf-4428-9a73-9a8eb183dde6","Type":"ContainerStarted","Data":"1f8224aabe1bd9ef1af4ce0ff511f63c72f84d8baf14dbf0e3fe6f689b0328c0"} Feb 04 08:54:15 crc kubenswrapper[4644]: I0204 08:54:15.843302 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-9c48fdfd-z7zmw" event={"ID":"ef11c1e1-54cf-4428-9a73-9a8eb183dde6","Type":"ContainerStarted","Data":"76b8c8772f110832dcfa9abc501d1c2eb2ac6072ed57a5bc68e112d9411d8035"} Feb 04 08:54:15 crc kubenswrapper[4644]: I0204 08:54:15.843550 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-9c48fdfd-z7zmw" Feb 04 08:54:15 crc kubenswrapper[4644]: I0204 08:54:15.879080 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-9c48fdfd-z7zmw" podStartSLOduration=2.879026481 podStartE2EDuration="2.879026481s" podCreationTimestamp="2026-02-04 08:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:54:15.870754675 +0000 UTC m=+765.910812440" watchObservedRunningTime="2026-02-04 08:54:15.879026481 +0000 UTC m=+765.919084236" Feb 04 08:54:17 crc kubenswrapper[4644]: I0204 08:54:17.356238 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-memberlist\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:17 crc kubenswrapper[4644]: I0204 08:54:17.380129 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4496f888-8e49-4a88-b753-7f2d55dc317a-memberlist\") pod \"speaker-twwks\" (UID: \"4496f888-8e49-4a88-b753-7f2d55dc317a\") " pod="metallb-system/speaker-twwks" Feb 04 08:54:17 crc kubenswrapper[4644]: I0204 08:54:17.581285 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-twwks" Feb 04 08:54:17 crc kubenswrapper[4644]: I0204 08:54:17.855205 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-twwks" event={"ID":"4496f888-8e49-4a88-b753-7f2d55dc317a","Type":"ContainerStarted","Data":"376df2337c9d2fc3394e7db3ebdf8ae5bdfc0a2d0f521124813e61ea79b168fa"} Feb 04 08:54:18 crc kubenswrapper[4644]: I0204 08:54:18.869545 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-twwks" event={"ID":"4496f888-8e49-4a88-b753-7f2d55dc317a","Type":"ContainerStarted","Data":"8113df018fbe5075359f82f16425b11ae8c35c42d132aa141146881a785f6a2b"} Feb 04 08:54:18 crc kubenswrapper[4644]: I0204 08:54:18.869841 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-twwks" event={"ID":"4496f888-8e49-4a88-b753-7f2d55dc317a","Type":"ContainerStarted","Data":"100a79c166ba0671f45730b3f3b3c18002ebbfaa2fce91a30fc9e6f622e5fd44"} Feb 04 08:54:18 crc kubenswrapper[4644]: I0204 08:54:18.870109 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-twwks" Feb 04 08:54:18 crc kubenswrapper[4644]: I0204 08:54:18.899151 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-twwks" podStartSLOduration=5.899135693 podStartE2EDuration="5.899135693s" podCreationTimestamp="2026-02-04 08:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:54:18.898771833 +0000 UTC m=+768.938829598" watchObservedRunningTime="2026-02-04 08:54:18.899135693 +0000 UTC m=+768.939193448" Feb 04 08:54:22 crc kubenswrapper[4644]: I0204 08:54:22.897634 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg" event={"ID":"fd959e6b-00cf-4818-8b5a-0ad09c060e5e","Type":"ContainerStarted","Data":"953cd4f0be675ca0d5a2f3f82cf493e9a695a4b33376017e9d1903cc0ab47e52"} Feb 04 08:54:22 crc kubenswrapper[4644]: I0204 08:54:22.898430 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg" Feb 04 08:54:22 crc kubenswrapper[4644]: I0204 08:54:22.900368 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6bp8j" event={"ID":"108d8162-12e1-4dfa-ab06-a416b6880150","Type":"ContainerStarted","Data":"521d3402ba8745073b2d8df0947afd2ccc01fcd0be563fea4db33d5714240831"} Feb 04 08:54:22 crc kubenswrapper[4644]: I0204 08:54:22.925553 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg" podStartSLOduration=1.9667615779999998 podStartE2EDuration="9.925528462s" podCreationTimestamp="2026-02-04 08:54:13 +0000 UTC" firstStartedPulling="2026-02-04 08:54:14.696661999 +0000 UTC m=+764.736719754" lastFinishedPulling="2026-02-04 08:54:22.655428843 +0000 UTC m=+772.695486638" observedRunningTime="2026-02-04 08:54:22.920424922 +0000 UTC m=+772.960482697" watchObservedRunningTime="2026-02-04 08:54:22.925528462 +0000 UTC m=+772.965586227" Feb 04 08:54:23 crc kubenswrapper[4644]: I0204 08:54:23.907027 4644 generic.go:334] "Generic (PLEG): container finished" podID="108d8162-12e1-4dfa-ab06-a416b6880150" containerID="521d3402ba8745073b2d8df0947afd2ccc01fcd0be563fea4db33d5714240831" exitCode=0 Feb 04 08:54:23 crc kubenswrapper[4644]: I0204 08:54:23.907376 4644 generic.go:334] "Generic (PLEG): container finished" 
podID="108d8162-12e1-4dfa-ab06-a416b6880150" containerID="bd3b2bd33c59e940bf9885535a6b06d40061fed3f22b89a20a5ec587e10981d8" exitCode=0 Feb 04 08:54:23 crc kubenswrapper[4644]: I0204 08:54:23.907140 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6bp8j" event={"ID":"108d8162-12e1-4dfa-ab06-a416b6880150","Type":"ContainerDied","Data":"521d3402ba8745073b2d8df0947afd2ccc01fcd0be563fea4db33d5714240831"} Feb 04 08:54:23 crc kubenswrapper[4644]: I0204 08:54:23.907449 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6bp8j" event={"ID":"108d8162-12e1-4dfa-ab06-a416b6880150","Type":"ContainerDied","Data":"bd3b2bd33c59e940bf9885535a6b06d40061fed3f22b89a20a5ec587e10981d8"} Feb 04 08:54:24 crc kubenswrapper[4644]: I0204 08:54:24.916068 4644 generic.go:334] "Generic (PLEG): container finished" podID="108d8162-12e1-4dfa-ab06-a416b6880150" containerID="755b22b7d947d792c7d07fa54617a786b1f6f4afc18e23890d5f15a29f46414c" exitCode=0 Feb 04 08:54:24 crc kubenswrapper[4644]: I0204 08:54:24.916132 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6bp8j" event={"ID":"108d8162-12e1-4dfa-ab06-a416b6880150","Type":"ContainerDied","Data":"755b22b7d947d792c7d07fa54617a786b1f6f4afc18e23890d5f15a29f46414c"} Feb 04 08:54:25 crc kubenswrapper[4644]: I0204 08:54:25.926501 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6bp8j" event={"ID":"108d8162-12e1-4dfa-ab06-a416b6880150","Type":"ContainerStarted","Data":"2144fc6b6fdcc4d242aa901a7aecead40318b17e2dfbc150d818afd45049b4e2"} Feb 04 08:54:25 crc kubenswrapper[4644]: I0204 08:54:25.926806 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6bp8j" event={"ID":"108d8162-12e1-4dfa-ab06-a416b6880150","Type":"ContainerStarted","Data":"4e210031737f48664d1983df4a759dc2baf618360dacc580d473970b63a71073"} Feb 04 08:54:25 crc kubenswrapper[4644]: I0204 08:54:25.926819 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6bp8j" event={"ID":"108d8162-12e1-4dfa-ab06-a416b6880150","Type":"ContainerStarted","Data":"0db5ad076c02675b25bb1295663bbfff81de0cc33a492d032bba5efc7a7873be"} Feb 04 08:54:25 crc kubenswrapper[4644]: I0204 08:54:25.926833 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:25 crc kubenswrapper[4644]: I0204 08:54:25.926844 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6bp8j" event={"ID":"108d8162-12e1-4dfa-ab06-a416b6880150","Type":"ContainerStarted","Data":"456a556d2a2668d724d6bf5ddd7bb6b8d38a446d75809561613fa0c01d02a285"} Feb 04 08:54:25 crc kubenswrapper[4644]: I0204 08:54:25.926855 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6bp8j" event={"ID":"108d8162-12e1-4dfa-ab06-a416b6880150","Type":"ContainerStarted","Data":"ed2b4508dd52152001fa134cdb227c605944d43fce0ca7b58069ba497fb838f3"} Feb 04 08:54:25 crc kubenswrapper[4644]: I0204 08:54:25.926865 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6bp8j" event={"ID":"108d8162-12e1-4dfa-ab06-a416b6880150","Type":"ContainerStarted","Data":"c22d133dbe0f78273fce70a6749c58ff671c8664c9de080aaad6d93cc7a389a0"} Feb 04 08:54:25 crc kubenswrapper[4644]: I0204 08:54:25.967922 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-6bp8j" podStartSLOduration=4.235412044 podStartE2EDuration="12.967899812s" 
podCreationTimestamp="2026-02-04 08:54:13 +0000 UTC" firstStartedPulling="2026-02-04 08:54:13.995314165 +0000 UTC m=+764.035371920" lastFinishedPulling="2026-02-04 08:54:22.727801913 +0000 UTC m=+772.767859688" observedRunningTime="2026-02-04 08:54:25.954850715 +0000 UTC m=+775.994908480" watchObservedRunningTime="2026-02-04 08:54:25.967899812 +0000 UTC m=+776.007957557" Feb 04 08:54:27 crc kubenswrapper[4644]: I0204 08:54:27.585599 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-twwks" Feb 04 08:54:28 crc kubenswrapper[4644]: I0204 08:54:28.837491 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:28 crc kubenswrapper[4644]: I0204 08:54:28.878715 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:30 crc kubenswrapper[4644]: I0204 08:54:30.472705 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-svjms"] Feb 04 08:54:30 crc kubenswrapper[4644]: I0204 08:54:30.473404 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-svjms" Feb 04 08:54:30 crc kubenswrapper[4644]: I0204 08:54:30.477495 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 04 08:54:30 crc kubenswrapper[4644]: I0204 08:54:30.478416 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 04 08:54:30 crc kubenswrapper[4644]: I0204 08:54:30.487709 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-r428x" Feb 04 08:54:30 crc kubenswrapper[4644]: I0204 08:54:30.491949 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-svjms"] Feb 04 08:54:30 crc kubenswrapper[4644]: I0204 08:54:30.646837 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndl5p\" (UniqueName: \"kubernetes.io/projected/5d3ac18f-82ae-46be-bb2a-49c11fee4f29-kube-api-access-ndl5p\") pod \"openstack-operator-index-svjms\" (UID: \"5d3ac18f-82ae-46be-bb2a-49c11fee4f29\") " pod="openstack-operators/openstack-operator-index-svjms" Feb 04 08:54:30 crc kubenswrapper[4644]: I0204 08:54:30.747702 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndl5p\" (UniqueName: \"kubernetes.io/projected/5d3ac18f-82ae-46be-bb2a-49c11fee4f29-kube-api-access-ndl5p\") pod \"openstack-operator-index-svjms\" (UID: \"5d3ac18f-82ae-46be-bb2a-49c11fee4f29\") " pod="openstack-operators/openstack-operator-index-svjms" Feb 04 08:54:30 crc kubenswrapper[4644]: I0204 08:54:30.761568 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 04 08:54:30 crc kubenswrapper[4644]: I0204 08:54:30.772368 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 04 08:54:30 crc kubenswrapper[4644]: I0204 08:54:30.797373 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndl5p\" (UniqueName: \"kubernetes.io/projected/5d3ac18f-82ae-46be-bb2a-49c11fee4f29-kube-api-access-ndl5p\") pod \"openstack-operator-index-svjms\" (UID: \"5d3ac18f-82ae-46be-bb2a-49c11fee4f29\") " 
pod="openstack-operators/openstack-operator-index-svjms" Feb 04 08:54:31 crc kubenswrapper[4644]: I0204 08:54:31.091926 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-r428x" Feb 04 08:54:31 crc kubenswrapper[4644]: I0204 08:54:31.100257 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-svjms" Feb 04 08:54:31 crc kubenswrapper[4644]: I0204 08:54:31.529561 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-svjms"] Feb 04 08:54:31 crc kubenswrapper[4644]: I0204 08:54:31.959686 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-svjms" event={"ID":"5d3ac18f-82ae-46be-bb2a-49c11fee4f29","Type":"ContainerStarted","Data":"8ff856cca72e849977f44eb0c195d3582b6edab76a307571019e7ddf4b9dfe1e"} Feb 04 08:54:33 crc kubenswrapper[4644]: I0204 08:54:33.636626 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-svjms"] Feb 04 08:54:33 crc kubenswrapper[4644]: I0204 08:54:33.973543 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-svjms" event={"ID":"5d3ac18f-82ae-46be-bb2a-49c11fee4f29","Type":"ContainerStarted","Data":"d20532d296d64517eff49cf33c60a4fe4db46fc31cf62aeccbf6821e9dbac510"} Feb 04 08:54:33 crc kubenswrapper[4644]: I0204 08:54:33.973731 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-svjms" podUID="5d3ac18f-82ae-46be-bb2a-49c11fee4f29" containerName="registry-server" containerID="cri-o://d20532d296d64517eff49cf33c60a4fe4db46fc31cf62aeccbf6821e9dbac510" gracePeriod=2 Feb 04 08:54:33 crc kubenswrapper[4644]: I0204 08:54:33.996728 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-svjms" podStartSLOduration=1.929853416 podStartE2EDuration="3.996701408s" podCreationTimestamp="2026-02-04 08:54:30 +0000 UTC" firstStartedPulling="2026-02-04 08:54:31.540697943 +0000 UTC m=+781.580755718" lastFinishedPulling="2026-02-04 08:54:33.607545955 +0000 UTC m=+783.647603710" observedRunningTime="2026-02-04 08:54:33.995529936 +0000 UTC m=+784.035587711" watchObservedRunningTime="2026-02-04 08:54:33.996701408 +0000 UTC m=+784.036759193" Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.254850 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-85gvc"] Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.256128 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-85gvc" Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.265075 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-85gvc"] Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.295867 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-425bt\" (UniqueName: \"kubernetes.io/projected/fad001d0-1475-450d-97d9-714d13e42d37-kube-api-access-425bt\") pod \"openstack-operator-index-85gvc\" (UID: \"fad001d0-1475-450d-97d9-714d13e42d37\") " pod="openstack-operators/openstack-operator-index-85gvc" Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.321118 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-svjms" Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.397271 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-425bt\" (UniqueName: \"kubernetes.io/projected/fad001d0-1475-450d-97d9-714d13e42d37-kube-api-access-425bt\") pod \"openstack-operator-index-85gvc\" (UID: \"fad001d0-1475-450d-97d9-714d13e42d37\") " pod="openstack-operators/openstack-operator-index-85gvc" Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.416137 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-425bt\" (UniqueName: \"kubernetes.io/projected/fad001d0-1475-450d-97d9-714d13e42d37-kube-api-access-425bt\") pod \"openstack-operator-index-85gvc\" (UID: \"fad001d0-1475-450d-97d9-714d13e42d37\") " pod="openstack-operators/openstack-operator-index-85gvc" Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.456339 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jcnsg" Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.498304 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndl5p\" (UniqueName: \"kubernetes.io/projected/5d3ac18f-82ae-46be-bb2a-49c11fee4f29-kube-api-access-ndl5p\") pod \"5d3ac18f-82ae-46be-bb2a-49c11fee4f29\" (UID: \"5d3ac18f-82ae-46be-bb2a-49c11fee4f29\") " Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.501835 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d3ac18f-82ae-46be-bb2a-49c11fee4f29-kube-api-access-ndl5p" (OuterVolumeSpecName: "kube-api-access-ndl5p") pod "5d3ac18f-82ae-46be-bb2a-49c11fee4f29" (UID: "5d3ac18f-82ae-46be-bb2a-49c11fee4f29"). InnerVolumeSpecName "kube-api-access-ndl5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.575706 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-85gvc" Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.599692 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndl5p\" (UniqueName: \"kubernetes.io/projected/5d3ac18f-82ae-46be-bb2a-49c11fee4f29-kube-api-access-ndl5p\") on node \"crc\" DevicePath \"\"" Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.727359 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-9c48fdfd-z7zmw" Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.864277 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-85gvc"] Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.980818 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-85gvc" event={"ID":"fad001d0-1475-450d-97d9-714d13e42d37","Type":"ContainerStarted","Data":"e7c703e2711fe542cd183dfa0d963879bed4c37841832542a78d37c454e40c5b"} Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.982707 4644 generic.go:334] "Generic (PLEG): container finished" podID="5d3ac18f-82ae-46be-bb2a-49c11fee4f29" containerID="d20532d296d64517eff49cf33c60a4fe4db46fc31cf62aeccbf6821e9dbac510" exitCode=0 Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.982739 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-svjms" event={"ID":"5d3ac18f-82ae-46be-bb2a-49c11fee4f29","Type":"ContainerDied","Data":"d20532d296d64517eff49cf33c60a4fe4db46fc31cf62aeccbf6821e9dbac510"} Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.982759 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-svjms" event={"ID":"5d3ac18f-82ae-46be-bb2a-49c11fee4f29","Type":"ContainerDied","Data":"8ff856cca72e849977f44eb0c195d3582b6edab76a307571019e7ddf4b9dfe1e"} Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.982780 4644 scope.go:117] "RemoveContainer" containerID="d20532d296d64517eff49cf33c60a4fe4db46fc31cf62aeccbf6821e9dbac510" Feb 04 08:54:34 crc kubenswrapper[4644]: I0204 08:54:34.982877 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-svjms" Feb 04 08:54:35 crc kubenswrapper[4644]: I0204 08:54:35.002182 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-svjms"] Feb 04 08:54:35 crc kubenswrapper[4644]: I0204 08:54:35.007435 4644 scope.go:117] "RemoveContainer" containerID="d20532d296d64517eff49cf33c60a4fe4db46fc31cf62aeccbf6821e9dbac510" Feb 04 08:54:35 crc kubenswrapper[4644]: E0204 08:54:35.007924 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d20532d296d64517eff49cf33c60a4fe4db46fc31cf62aeccbf6821e9dbac510\": container with ID starting with d20532d296d64517eff49cf33c60a4fe4db46fc31cf62aeccbf6821e9dbac510 not found: ID does not exist" containerID="d20532d296d64517eff49cf33c60a4fe4db46fc31cf62aeccbf6821e9dbac510" Feb 04 08:54:35 crc kubenswrapper[4644]: I0204 08:54:35.007955 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d20532d296d64517eff49cf33c60a4fe4db46fc31cf62aeccbf6821e9dbac510"} err="failed to get container status \"d20532d296d64517eff49cf33c60a4fe4db46fc31cf62aeccbf6821e9dbac510\": rpc error: code = NotFound desc = could not find container \"d20532d296d64517eff49cf33c60a4fe4db46fc31cf62aeccbf6821e9dbac510\": container with ID starting with d20532d296d64517eff49cf33c60a4fe4db46fc31cf62aeccbf6821e9dbac510 not found: ID does not exist" Feb 04 08:54:35 crc kubenswrapper[4644]: I0204 08:54:35.009595 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-svjms"] Feb 04 08:54:35 crc kubenswrapper[4644]: I0204 08:54:35.554901 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 08:54:35 crc kubenswrapper[4644]: I0204 08:54:35.554987 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 08:54:35 crc kubenswrapper[4644]: I0204 08:54:35.555053 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:54:35 crc kubenswrapper[4644]: I0204 08:54:35.555996 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c81dc8963c853292a044170f0ee77ae242e3b6dd8a83fa571fd5d2427fd33119"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 08:54:35 crc kubenswrapper[4644]: I0204 08:54:35.556103 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://c81dc8963c853292a044170f0ee77ae242e3b6dd8a83fa571fd5d2427fd33119" gracePeriod=600 Feb 04 08:54:35 crc kubenswrapper[4644]: I0204 08:54:35.991909 4644 generic.go:334] "Generic (PLEG): 
container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="c81dc8963c853292a044170f0ee77ae242e3b6dd8a83fa571fd5d2427fd33119" exitCode=0 Feb 04 08:54:35 crc kubenswrapper[4644]: I0204 08:54:35.991973 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"c81dc8963c853292a044170f0ee77ae242e3b6dd8a83fa571fd5d2427fd33119"} Feb 04 08:54:35 crc kubenswrapper[4644]: I0204 08:54:35.992021 4644 scope.go:117] "RemoveContainer" containerID="c5a3e4c401265e263cbe126a63f7ddbc0c32c42ae952e4d4306ad097f58ca211" Feb 04 08:54:36 crc kubenswrapper[4644]: I0204 08:54:36.667274 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d3ac18f-82ae-46be-bb2a-49c11fee4f29" path="/var/lib/kubelet/pods/5d3ac18f-82ae-46be-bb2a-49c11fee4f29/volumes" Feb 04 08:54:37 crc kubenswrapper[4644]: I0204 08:54:37.004206 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-85gvc" event={"ID":"fad001d0-1475-450d-97d9-714d13e42d37","Type":"ContainerStarted","Data":"4567f3b3b312435cbff446a6144b58a6b3533836cac0949659a7fd7975fc3bbb"} Feb 04 08:54:37 crc kubenswrapper[4644]: I0204 08:54:37.017068 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"d37b7ec44c6b923e084d94c0277cc27b0523c1422f5853a55c2775dc5aaf2703"} Feb 04 08:54:37 crc kubenswrapper[4644]: I0204 08:54:37.022090 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-85gvc" podStartSLOduration=1.498257384 podStartE2EDuration="3.022069s" podCreationTimestamp="2026-02-04 08:54:34 +0000 UTC" firstStartedPulling="2026-02-04 08:54:34.878063716 +0000 UTC m=+784.918121471" lastFinishedPulling="2026-02-04 08:54:36.401875312 +0000 UTC m=+786.441933087" observedRunningTime="2026-02-04 08:54:37.019475929 +0000 UTC m=+787.059533774" watchObservedRunningTime="2026-02-04 08:54:37.022069 +0000 UTC m=+787.062126775" Feb 04 08:54:43 crc kubenswrapper[4644]: I0204 08:54:43.843010 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-6bp8j" Feb 04 08:54:44 crc kubenswrapper[4644]: I0204 08:54:44.576740 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-85gvc" Feb 04 08:54:44 crc kubenswrapper[4644]: I0204 08:54:44.576819 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-85gvc" Feb 04 08:54:44 crc kubenswrapper[4644]: I0204 08:54:44.624282 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-85gvc" Feb 04 08:54:45 crc kubenswrapper[4644]: I0204 08:54:45.094996 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-85gvc" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.313691 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp"] Feb 04 08:54:46 crc kubenswrapper[4644]: E0204 08:54:46.315018 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d3ac18f-82ae-46be-bb2a-49c11fee4f29" 
containerName="registry-server" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.315129 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d3ac18f-82ae-46be-bb2a-49c11fee4f29" containerName="registry-server" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.315496 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d3ac18f-82ae-46be-bb2a-49c11fee4f29" containerName="registry-server" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.316559 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.318364 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-lm2qw" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.323251 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp"] Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.385500 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-util\") pod \"f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp\" (UID: \"92a3c41f-925d-4ff3-a3ff-77f9c35216fb\") " pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.385646 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-bundle\") pod \"f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp\" (UID: \"92a3c41f-925d-4ff3-a3ff-77f9c35216fb\") " pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.385686 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqmh5\" (UniqueName: \"kubernetes.io/projected/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-kube-api-access-lqmh5\") pod \"f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp\" (UID: \"92a3c41f-925d-4ff3-a3ff-77f9c35216fb\") " pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.487125 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-util\") pod \"f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp\" (UID: \"92a3c41f-925d-4ff3-a3ff-77f9c35216fb\") " pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.487488 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-bundle\") pod \"f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp\" (UID: \"92a3c41f-925d-4ff3-a3ff-77f9c35216fb\") " pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.487617 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lqmh5\" (UniqueName: \"kubernetes.io/projected/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-kube-api-access-lqmh5\") pod \"f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp\" (UID: \"92a3c41f-925d-4ff3-a3ff-77f9c35216fb\") " pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.487711 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-util\") pod \"f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp\" (UID: \"92a3c41f-925d-4ff3-a3ff-77f9c35216fb\") " pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.487879 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-bundle\") pod \"f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp\" (UID: \"92a3c41f-925d-4ff3-a3ff-77f9c35216fb\") " pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.509422 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqmh5\" (UniqueName: \"kubernetes.io/projected/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-kube-api-access-lqmh5\") pod \"f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp\" (UID: \"92a3c41f-925d-4ff3-a3ff-77f9c35216fb\") " pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.636471 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" Feb 04 08:54:46 crc kubenswrapper[4644]: I0204 08:54:46.852303 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp"] Feb 04 08:54:46 crc kubenswrapper[4644]: W0204 08:54:46.857514 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92a3c41f_925d_4ff3_a3ff_77f9c35216fb.slice/crio-7845b51b2235c3eabf2897d34cc4547a2a0c2dab31e80f495701083b43d5cbf0 WatchSource:0}: Error finding container 7845b51b2235c3eabf2897d34cc4547a2a0c2dab31e80f495701083b43d5cbf0: Status 404 returned error can't find the container with id 7845b51b2235c3eabf2897d34cc4547a2a0c2dab31e80f495701083b43d5cbf0 Feb 04 08:54:47 crc kubenswrapper[4644]: I0204 08:54:47.085357 4644 generic.go:334] "Generic (PLEG): container finished" podID="92a3c41f-925d-4ff3-a3ff-77f9c35216fb" containerID="08eb069e65cfdcd2654698401a14c28f89d27f4dba2cf109c81a9f1c2fd91b34" exitCode=0 Feb 04 08:54:47 crc kubenswrapper[4644]: I0204 08:54:47.085558 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" event={"ID":"92a3c41f-925d-4ff3-a3ff-77f9c35216fb","Type":"ContainerDied","Data":"08eb069e65cfdcd2654698401a14c28f89d27f4dba2cf109c81a9f1c2fd91b34"} Feb 04 08:54:47 crc kubenswrapper[4644]: I0204 08:54:47.085694 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" event={"ID":"92a3c41f-925d-4ff3-a3ff-77f9c35216fb","Type":"ContainerStarted","Data":"7845b51b2235c3eabf2897d34cc4547a2a0c2dab31e80f495701083b43d5cbf0"} Feb 04 08:54:48 crc kubenswrapper[4644]: I0204 08:54:48.095207 4644 generic.go:334] "Generic (PLEG): container finished" podID="92a3c41f-925d-4ff3-a3ff-77f9c35216fb" containerID="e5d96c695ec8367c617d5443841ff462253398b3e6bc711736ff7b9419d68ea1" exitCode=0 Feb 04 08:54:48 crc kubenswrapper[4644]: I0204 08:54:48.095253 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" event={"ID":"92a3c41f-925d-4ff3-a3ff-77f9c35216fb","Type":"ContainerDied","Data":"e5d96c695ec8367c617d5443841ff462253398b3e6bc711736ff7b9419d68ea1"} Feb 04 08:54:49 crc kubenswrapper[4644]: I0204 08:54:49.103958 4644 generic.go:334] "Generic (PLEG): container finished" podID="92a3c41f-925d-4ff3-a3ff-77f9c35216fb" containerID="a482684be4030ae288e56d4f2b76b15e623de0428aede3b6ab6a9748fa4cebcd" exitCode=0 Feb 04 08:54:49 crc kubenswrapper[4644]: I0204 08:54:49.104008 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" event={"ID":"92a3c41f-925d-4ff3-a3ff-77f9c35216fb","Type":"ContainerDied","Data":"a482684be4030ae288e56d4f2b76b15e623de0428aede3b6ab6a9748fa4cebcd"} Feb 04 08:54:50 crc kubenswrapper[4644]: I0204 08:54:50.453025 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" Feb 04 08:54:50 crc kubenswrapper[4644]: I0204 08:54:50.642545 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-util\") pod \"92a3c41f-925d-4ff3-a3ff-77f9c35216fb\" (UID: \"92a3c41f-925d-4ff3-a3ff-77f9c35216fb\") " Feb 04 08:54:50 crc kubenswrapper[4644]: I0204 08:54:50.642653 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqmh5\" (UniqueName: \"kubernetes.io/projected/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-kube-api-access-lqmh5\") pod \"92a3c41f-925d-4ff3-a3ff-77f9c35216fb\" (UID: \"92a3c41f-925d-4ff3-a3ff-77f9c35216fb\") " Feb 04 08:54:50 crc kubenswrapper[4644]: I0204 08:54:50.642836 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-bundle\") pod \"92a3c41f-925d-4ff3-a3ff-77f9c35216fb\" (UID: \"92a3c41f-925d-4ff3-a3ff-77f9c35216fb\") " Feb 04 08:54:50 crc kubenswrapper[4644]: I0204 08:54:50.643530 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-bundle" (OuterVolumeSpecName: "bundle") pod "92a3c41f-925d-4ff3-a3ff-77f9c35216fb" (UID: "92a3c41f-925d-4ff3-a3ff-77f9c35216fb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:54:50 crc kubenswrapper[4644]: I0204 08:54:50.657637 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-kube-api-access-lqmh5" (OuterVolumeSpecName: "kube-api-access-lqmh5") pod "92a3c41f-925d-4ff3-a3ff-77f9c35216fb" (UID: "92a3c41f-925d-4ff3-a3ff-77f9c35216fb"). InnerVolumeSpecName "kube-api-access-lqmh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:54:50 crc kubenswrapper[4644]: I0204 08:54:50.658047 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-util" (OuterVolumeSpecName: "util") pod "92a3c41f-925d-4ff3-a3ff-77f9c35216fb" (UID: "92a3c41f-925d-4ff3-a3ff-77f9c35216fb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:54:50 crc kubenswrapper[4644]: I0204 08:54:50.744400 4644 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-util\") on node \"crc\" DevicePath \"\"" Feb 04 08:54:50 crc kubenswrapper[4644]: I0204 08:54:50.744438 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqmh5\" (UniqueName: \"kubernetes.io/projected/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-kube-api-access-lqmh5\") on node \"crc\" DevicePath \"\"" Feb 04 08:54:50 crc kubenswrapper[4644]: I0204 08:54:50.744475 4644 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92a3c41f-925d-4ff3-a3ff-77f9c35216fb-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 08:54:51 crc kubenswrapper[4644]: I0204 08:54:51.135852 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" event={"ID":"92a3c41f-925d-4ff3-a3ff-77f9c35216fb","Type":"ContainerDied","Data":"7845b51b2235c3eabf2897d34cc4547a2a0c2dab31e80f495701083b43d5cbf0"} Feb 04 08:54:51 crc kubenswrapper[4644]: I0204 08:54:51.136091 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7845b51b2235c3eabf2897d34cc4547a2a0c2dab31e80f495701083b43d5cbf0" Feb 04 08:54:51 crc kubenswrapper[4644]: I0204 08:54:51.135966 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp" Feb 04 08:54:52 crc kubenswrapper[4644]: I0204 08:54:52.925918 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7779fb4444-rsl7v"] Feb 04 08:54:52 crc kubenswrapper[4644]: E0204 08:54:52.926206 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a3c41f-925d-4ff3-a3ff-77f9c35216fb" containerName="pull" Feb 04 08:54:52 crc kubenswrapper[4644]: I0204 08:54:52.926220 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a3c41f-925d-4ff3-a3ff-77f9c35216fb" containerName="pull" Feb 04 08:54:52 crc kubenswrapper[4644]: E0204 08:54:52.926236 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a3c41f-925d-4ff3-a3ff-77f9c35216fb" containerName="extract" Feb 04 08:54:52 crc kubenswrapper[4644]: I0204 08:54:52.926243 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a3c41f-925d-4ff3-a3ff-77f9c35216fb" containerName="extract" Feb 04 08:54:52 crc kubenswrapper[4644]: E0204 08:54:52.926268 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a3c41f-925d-4ff3-a3ff-77f9c35216fb" containerName="util" Feb 04 08:54:52 crc kubenswrapper[4644]: I0204 08:54:52.926275 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a3c41f-925d-4ff3-a3ff-77f9c35216fb" containerName="util" Feb 04 08:54:52 crc kubenswrapper[4644]: I0204 08:54:52.926422 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="92a3c41f-925d-4ff3-a3ff-77f9c35216fb" containerName="extract" Feb 04 08:54:52 crc kubenswrapper[4644]: I0204 08:54:52.926950 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7779fb4444-rsl7v" Feb 04 08:54:52 crc kubenswrapper[4644]: I0204 08:54:52.930239 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-xsg8n" Feb 04 08:54:52 crc kubenswrapper[4644]: I0204 08:54:52.953154 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7779fb4444-rsl7v"] Feb 04 08:54:52 crc kubenswrapper[4644]: I0204 08:54:52.975954 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-572fr\" (UniqueName: \"kubernetes.io/projected/9e804a34-fb91-4608-84f0-08283597694b-kube-api-access-572fr\") pod \"openstack-operator-controller-init-7779fb4444-rsl7v\" (UID: \"9e804a34-fb91-4608-84f0-08283597694b\") " pod="openstack-operators/openstack-operator-controller-init-7779fb4444-rsl7v" Feb 04 08:54:53 crc kubenswrapper[4644]: I0204 08:54:53.076789 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-572fr\" (UniqueName: \"kubernetes.io/projected/9e804a34-fb91-4608-84f0-08283597694b-kube-api-access-572fr\") pod \"openstack-operator-controller-init-7779fb4444-rsl7v\" (UID: \"9e804a34-fb91-4608-84f0-08283597694b\") " pod="openstack-operators/openstack-operator-controller-init-7779fb4444-rsl7v" Feb 04 08:54:53 crc kubenswrapper[4644]: I0204 08:54:53.103382 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-572fr\" (UniqueName: \"kubernetes.io/projected/9e804a34-fb91-4608-84f0-08283597694b-kube-api-access-572fr\") pod \"openstack-operator-controller-init-7779fb4444-rsl7v\" (UID: \"9e804a34-fb91-4608-84f0-08283597694b\") " pod="openstack-operators/openstack-operator-controller-init-7779fb4444-rsl7v" Feb 04 08:54:53 crc kubenswrapper[4644]: I0204 08:54:53.241254 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7779fb4444-rsl7v" Feb 04 08:54:53 crc kubenswrapper[4644]: I0204 08:54:53.727607 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7779fb4444-rsl7v"] Feb 04 08:54:53 crc kubenswrapper[4644]: W0204 08:54:53.735029 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e804a34_fb91_4608_84f0_08283597694b.slice/crio-c600c3872e0c940ad7d3f8cd457ee96f480f20d9ceac46885ceb210cf210bf46 WatchSource:0}: Error finding container c600c3872e0c940ad7d3f8cd457ee96f480f20d9ceac46885ceb210cf210bf46: Status 404 returned error can't find the container with id c600c3872e0c940ad7d3f8cd457ee96f480f20d9ceac46885ceb210cf210bf46 Feb 04 08:54:54 crc kubenswrapper[4644]: I0204 08:54:54.152579 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7779fb4444-rsl7v" event={"ID":"9e804a34-fb91-4608-84f0-08283597694b","Type":"ContainerStarted","Data":"c600c3872e0c940ad7d3f8cd457ee96f480f20d9ceac46885ceb210cf210bf46"} Feb 04 08:54:58 crc kubenswrapper[4644]: I0204 08:54:58.182903 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7779fb4444-rsl7v" event={"ID":"9e804a34-fb91-4608-84f0-08283597694b","Type":"ContainerStarted","Data":"6a24dd1629dd28a365fe32f0a3b9fdf92338ae70b0b59ab0372c35c49a875626"} Feb 04 08:54:58 crc kubenswrapper[4644]: I0204 08:54:58.183445 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7779fb4444-rsl7v" Feb 04 08:54:58 crc kubenswrapper[4644]: I0204 08:54:58.211294 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7779fb4444-rsl7v" podStartSLOduration=2.111908783 podStartE2EDuration="6.211279738s" podCreationTimestamp="2026-02-04 08:54:52 +0000 UTC" firstStartedPulling="2026-02-04 08:54:53.73702779 +0000 UTC m=+803.777085545" lastFinishedPulling="2026-02-04 08:54:57.836398745 +0000 UTC m=+807.876456500" observedRunningTime="2026-02-04 08:54:58.209142009 +0000 UTC m=+808.249199764" watchObservedRunningTime="2026-02-04 08:54:58.211279738 +0000 UTC m=+808.251337493" Feb 04 08:55:03 crc kubenswrapper[4644]: I0204 08:55:03.243657 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7779fb4444-rsl7v" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.272318 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-sxbgc"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.273784 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sxbgc" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.276360 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fhr46"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.277043 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fhr46" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.279776 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ltjw5" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.290523 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-sxbgc"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.293677 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfm8k\" (UniqueName: \"kubernetes.io/projected/86635827-026c-4145-9130-3c300da69963-kube-api-access-tfm8k\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-fhr46\" (UID: \"86635827-026c-4145-9130-3c300da69963\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fhr46" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.293745 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj28j\" (UniqueName: \"kubernetes.io/projected/65e46d7b-9b3f-447b-91da-35322d406623-kube-api-access-qj28j\") pod \"cinder-operator-controller-manager-8d874c8fc-sxbgc\" (UID: \"65e46d7b-9b3f-447b-91da-35322d406623\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sxbgc" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.302737 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ms9gl" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.304022 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-hwkc4"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.304740 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hwkc4" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.308622 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-q5285" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.312237 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-stnhl"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.313550 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-stnhl" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.315747 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-8ckrv" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.324163 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fhr46"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.370311 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-stnhl"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.395680 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-hwkc4"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.396215 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfm8k\" (UniqueName: \"kubernetes.io/projected/86635827-026c-4145-9130-3c300da69963-kube-api-access-tfm8k\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-fhr46\" (UID: \"86635827-026c-4145-9130-3c300da69963\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fhr46" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.396280 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq5nn\" (UniqueName: \"kubernetes.io/projected/362644b0-399b-4476-b8f7-9723011b9053-kube-api-access-sq5nn\") pod \"glance-operator-controller-manager-8886f4c47-stnhl\" (UID: \"362644b0-399b-4476-b8f7-9723011b9053\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-stnhl" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.396378 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj28j\" (UniqueName: \"kubernetes.io/projected/65e46d7b-9b3f-447b-91da-35322d406623-kube-api-access-qj28j\") pod \"cinder-operator-controller-manager-8d874c8fc-sxbgc\" (UID: \"65e46d7b-9b3f-447b-91da-35322d406623\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sxbgc" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.396404 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5mj8\" (UniqueName: \"kubernetes.io/projected/3bb04651-3f3e-4f0a-8822-11279a338e20-kube-api-access-b5mj8\") pod \"designate-operator-controller-manager-6d9697b7f4-hwkc4\" (UID: \"3bb04651-3f3e-4f0a-8822-11279a338e20\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hwkc4" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.409375 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-cln6d"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.410094 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-cln6d" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.413349 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-j22qw" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.418275 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-cln6d"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.431956 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfm8k\" (UniqueName: \"kubernetes.io/projected/86635827-026c-4145-9130-3c300da69963-kube-api-access-tfm8k\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-fhr46\" (UID: \"86635827-026c-4145-9130-3c300da69963\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fhr46" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.441083 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj28j\" (UniqueName: \"kubernetes.io/projected/65e46d7b-9b3f-447b-91da-35322d406623-kube-api-access-qj28j\") pod \"cinder-operator-controller-manager-8d874c8fc-sxbgc\" (UID: \"65e46d7b-9b3f-447b-91da-35322d406623\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sxbgc" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.444362 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-g9w8f"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.445290 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g9w8f" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.447661 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-kwm5w" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.464343 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-g9w8f"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.501723 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq5nn\" (UniqueName: \"kubernetes.io/projected/362644b0-399b-4476-b8f7-9723011b9053-kube-api-access-sq5nn\") pod \"glance-operator-controller-manager-8886f4c47-stnhl\" (UID: \"362644b0-399b-4476-b8f7-9723011b9053\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-stnhl" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.501881 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mj8\" (UniqueName: \"kubernetes.io/projected/3bb04651-3f3e-4f0a-8822-11279a338e20-kube-api-access-b5mj8\") pod \"designate-operator-controller-manager-6d9697b7f4-hwkc4\" (UID: \"3bb04651-3f3e-4f0a-8822-11279a338e20\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hwkc4" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.515543 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pb5zg"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.532108 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5mj8\" (UniqueName: 
\"kubernetes.io/projected/3bb04651-3f3e-4f0a-8822-11279a338e20-kube-api-access-b5mj8\") pod \"designate-operator-controller-manager-6d9697b7f4-hwkc4\" (UID: \"3bb04651-3f3e-4f0a-8822-11279a338e20\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hwkc4" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.532286 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq5nn\" (UniqueName: \"kubernetes.io/projected/362644b0-399b-4476-b8f7-9723011b9053-kube-api-access-sq5nn\") pod \"glance-operator-controller-manager-8886f4c47-stnhl\" (UID: \"362644b0-399b-4476-b8f7-9723011b9053\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-stnhl" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.556634 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pb5zg"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.556675 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.557386 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.557812 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pb5zg" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.562853 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.563290 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-tnqcs" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.563440 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-wln5r" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.569867 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-xmsgv"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.570620 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xmsgv" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.573144 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-4z7qr" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.583226 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.587796 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-xmsgv"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.594834 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sxbgc" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.606026 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvv68\" (UniqueName: \"kubernetes.io/projected/718025b3-0dfa-4c50-a020-8fc030f6061c-kube-api-access-jvv68\") pod \"horizon-operator-controller-manager-5fb775575f-g9w8f\" (UID: \"718025b3-0dfa-4c50-a020-8fc030f6061c\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g9w8f" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.609222 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n662\" (UniqueName: \"kubernetes.io/projected/af50abdc-12fd-4e29-b6ce-804f91e185f5-kube-api-access-2n662\") pod \"infra-operator-controller-manager-79955696d6-6ldzh\" (UID: \"af50abdc-12fd-4e29-b6ce-804f91e185f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.609289 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert\") pod \"infra-operator-controller-manager-79955696d6-6ldzh\" (UID: \"af50abdc-12fd-4e29-b6ce-804f91e185f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.609308 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9t5f\" (UniqueName: \"kubernetes.io/projected/b3816529-aae3-447c-b497-027d78669856-kube-api-access-k9t5f\") pod \"ironic-operator-controller-manager-5f4b8bd54d-pb5zg\" (UID: \"b3816529-aae3-447c-b497-027d78669856\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pb5zg" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.609349 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26dcz\" (UniqueName: \"kubernetes.io/projected/b449c147-de4b-4503-b680-86e2a43715e2-kube-api-access-26dcz\") pod \"heat-operator-controller-manager-69d6db494d-cln6d\" (UID: \"b449c147-de4b-4503-b680-86e2a43715e2\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-cln6d" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.609368 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7svs\" (UniqueName: \"kubernetes.io/projected/e9033b55-edfc-440d-bd2c-fa027d27f034-kube-api-access-q7svs\") pod \"keystone-operator-controller-manager-84f48565d4-xmsgv\" (UID: \"e9033b55-edfc-440d-bd2c-fa027d27f034\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xmsgv" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.614596 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-t5sv7"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.615555 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-t5sv7" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.616355 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fhr46" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.635806 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-jvx8k" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.636461 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hwkc4" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.636752 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-stnhl" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.648649 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-t5sv7"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.657605 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-xw5rw"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.658321 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xw5rw" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.662407 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-xw5rw"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.662635 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-h2d6s" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.674958 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.675988 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.689434 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.696144 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-9nlht" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.709300 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-v6q27"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.711180 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvv68\" (UniqueName: \"kubernetes.io/projected/718025b3-0dfa-4c50-a020-8fc030f6061c-kube-api-access-jvv68\") pod \"horizon-operator-controller-manager-5fb775575f-g9w8f\" (UID: \"718025b3-0dfa-4c50-a020-8fc030f6061c\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g9w8f" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.711381 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9f55\" (UniqueName: \"kubernetes.io/projected/08ce9496-06f2-4a40-aac7-eaddbc4eb617-kube-api-access-s9f55\") pod \"mariadb-operator-controller-manager-67bf948998-xw5rw\" (UID: \"08ce9496-06f2-4a40-aac7-eaddbc4eb617\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xw5rw" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.711510 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n662\" (UniqueName: \"kubernetes.io/projected/af50abdc-12fd-4e29-b6ce-804f91e185f5-kube-api-access-2n662\") pod \"infra-operator-controller-manager-79955696d6-6ldzh\" (UID: \"af50abdc-12fd-4e29-b6ce-804f91e185f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.711541 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert\") pod \"infra-operator-controller-manager-79955696d6-6ldzh\" (UID: \"af50abdc-12fd-4e29-b6ce-804f91e185f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.711557 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9t5f\" (UniqueName: \"kubernetes.io/projected/b3816529-aae3-447c-b497-027d78669856-kube-api-access-k9t5f\") pod \"ironic-operator-controller-manager-5f4b8bd54d-pb5zg\" (UID: \"b3816529-aae3-447c-b497-027d78669856\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pb5zg" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.715126 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26dcz\" (UniqueName: \"kubernetes.io/projected/b449c147-de4b-4503-b680-86e2a43715e2-kube-api-access-26dcz\") pod \"heat-operator-controller-manager-69d6db494d-cln6d\" (UID: \"b449c147-de4b-4503-b680-86e2a43715e2\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-cln6d" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.715171 4644 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-q7svs\" (UniqueName: \"kubernetes.io/projected/e9033b55-edfc-440d-bd2c-fa027d27f034-kube-api-access-q7svs\") pod \"keystone-operator-controller-manager-84f48565d4-xmsgv\" (UID: \"e9033b55-edfc-440d-bd2c-fa027d27f034\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xmsgv" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.715210 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmtlf\" (UniqueName: \"kubernetes.io/projected/1126de8e-d0ae-4d0d-a7d3-cad73f6cc672-kube-api-access-wmtlf\") pod \"neutron-operator-controller-manager-585dbc889-6mv9v\" (UID: \"1126de8e-d0ae-4d0d-a7d3-cad73f6cc672\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v" Feb 04 08:55:21 crc kubenswrapper[4644]: E0204 08:55:21.712245 4644 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 04 08:55:21 crc kubenswrapper[4644]: E0204 08:55:21.715292 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert podName:af50abdc-12fd-4e29-b6ce-804f91e185f5 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:22.215268978 +0000 UTC m=+832.255326733 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert") pod "infra-operator-controller-manager-79955696d6-6ldzh" (UID: "af50abdc-12fd-4e29-b6ce-804f91e185f5") : secret "infra-operator-webhook-server-cert" not found Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.715351 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sh2w\" (UniqueName: \"kubernetes.io/projected/f1aab4ac-082c-4c69-94c8-6291514178b7-kube-api-access-9sh2w\") pod \"manila-operator-controller-manager-7dd968899f-t5sv7\" (UID: \"f1aab4ac-082c-4c69-94c8-6291514178b7\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-t5sv7" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.711935 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v6q27" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.740943 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-9n6pj"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.741779 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9n6pj" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.762006 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-z7dg8" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.762178 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-v6q27"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.762197 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ss22n" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.771799 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvv68\" (UniqueName: \"kubernetes.io/projected/718025b3-0dfa-4c50-a020-8fc030f6061c-kube-api-access-jvv68\") pod \"horizon-operator-controller-manager-5fb775575f-g9w8f\" (UID: \"718025b3-0dfa-4c50-a020-8fc030f6061c\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g9w8f" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.771900 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9t5f\" (UniqueName: \"kubernetes.io/projected/b3816529-aae3-447c-b497-027d78669856-kube-api-access-k9t5f\") pod \"ironic-operator-controller-manager-5f4b8bd54d-pb5zg\" (UID: \"b3816529-aae3-447c-b497-027d78669856\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pb5zg" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.783831 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26dcz\" (UniqueName: \"kubernetes.io/projected/b449c147-de4b-4503-b680-86e2a43715e2-kube-api-access-26dcz\") pod \"heat-operator-controller-manager-69d6db494d-cln6d\" (UID: \"b449c147-de4b-4503-b680-86e2a43715e2\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-cln6d" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.783861 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n662\" (UniqueName: \"kubernetes.io/projected/af50abdc-12fd-4e29-b6ce-804f91e185f5-kube-api-access-2n662\") pod \"infra-operator-controller-manager-79955696d6-6ldzh\" (UID: \"af50abdc-12fd-4e29-b6ce-804f91e185f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.796689 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7svs\" (UniqueName: \"kubernetes.io/projected/e9033b55-edfc-440d-bd2c-fa027d27f034-kube-api-access-q7svs\") pod \"keystone-operator-controller-manager-84f48565d4-xmsgv\" (UID: \"e9033b55-edfc-440d-bd2c-fa027d27f034\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xmsgv" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.803708 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-9n6pj"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.810309 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g9w8f" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.817009 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9f55\" (UniqueName: \"kubernetes.io/projected/08ce9496-06f2-4a40-aac7-eaddbc4eb617-kube-api-access-s9f55\") pod \"mariadb-operator-controller-manager-67bf948998-xw5rw\" (UID: \"08ce9496-06f2-4a40-aac7-eaddbc4eb617\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xw5rw" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.817158 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmtlf\" (UniqueName: \"kubernetes.io/projected/1126de8e-d0ae-4d0d-a7d3-cad73f6cc672-kube-api-access-wmtlf\") pod \"neutron-operator-controller-manager-585dbc889-6mv9v\" (UID: \"1126de8e-d0ae-4d0d-a7d3-cad73f6cc672\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.817281 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sh2w\" (UniqueName: \"kubernetes.io/projected/f1aab4ac-082c-4c69-94c8-6291514178b7-kube-api-access-9sh2w\") pod \"manila-operator-controller-manager-7dd968899f-t5sv7\" (UID: \"f1aab4ac-082c-4c69-94c8-6291514178b7\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-t5sv7" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.817411 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb7wh\" (UniqueName: \"kubernetes.io/projected/6f482e24-1f12-48bd-8944-93b1e7ee2d76-kube-api-access-qb7wh\") pod \"nova-operator-controller-manager-55bff696bd-v6q27\" (UID: \"6f482e24-1f12-48bd-8944-93b1e7ee2d76\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v6q27" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.817495 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5frf2\" (UniqueName: \"kubernetes.io/projected/0d5154cd-bccf-4112-a9b5-df0cf8375905-kube-api-access-5frf2\") pod \"octavia-operator-controller-manager-6687f8d877-9n6pj\" (UID: \"0d5154cd-bccf-4112-a9b5-df0cf8375905\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9n6pj" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.862108 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-hp2fd"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.873393 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-hp2fd" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.876895 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9f55\" (UniqueName: \"kubernetes.io/projected/08ce9496-06f2-4a40-aac7-eaddbc4eb617-kube-api-access-s9f55\") pod \"mariadb-operator-controller-manager-67bf948998-xw5rw\" (UID: \"08ce9496-06f2-4a40-aac7-eaddbc4eb617\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xw5rw" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.885089 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-qxtr6" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.887719 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.888735 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.894640 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xphph" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.894797 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.895214 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmtlf\" (UniqueName: \"kubernetes.io/projected/1126de8e-d0ae-4d0d-a7d3-cad73f6cc672-kube-api-access-wmtlf\") pod \"neutron-operator-controller-manager-585dbc889-6mv9v\" (UID: \"1126de8e-d0ae-4d0d-a7d3-cad73f6cc672\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.902341 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sh2w\" (UniqueName: \"kubernetes.io/projected/f1aab4ac-082c-4c69-94c8-6291514178b7-kube-api-access-9sh2w\") pod \"manila-operator-controller-manager-7dd968899f-t5sv7\" (UID: \"f1aab4ac-082c-4c69-94c8-6291514178b7\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-t5sv7" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.919623 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vrrd\" (UniqueName: \"kubernetes.io/projected/d92e25ae-9963-4073-9b4e-66f4aafff7a6-kube-api-access-2vrrd\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d777fx\" (UID: \"d92e25ae-9963-4073-9b4e-66f4aafff7a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.919667 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d777fx\" (UID: \"d92e25ae-9963-4073-9b4e-66f4aafff7a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.919694 4644 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb7wh\" (UniqueName: \"kubernetes.io/projected/6f482e24-1f12-48bd-8944-93b1e7ee2d76-kube-api-access-qb7wh\") pod \"nova-operator-controller-manager-55bff696bd-v6q27\" (UID: \"6f482e24-1f12-48bd-8944-93b1e7ee2d76\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v6q27" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.919714 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5frf2\" (UniqueName: \"kubernetes.io/projected/0d5154cd-bccf-4112-a9b5-df0cf8375905-kube-api-access-5frf2\") pod \"octavia-operator-controller-manager-6687f8d877-9n6pj\" (UID: \"0d5154cd-bccf-4112-a9b5-df0cf8375905\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9n6pj" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.919842 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-295n4\" (UniqueName: \"kubernetes.io/projected/dca5895b-8bfa-4060-a60d-79e37d0eefe6-kube-api-access-295n4\") pod \"ovn-operator-controller-manager-788c46999f-hp2fd\" (UID: \"dca5895b-8bfa-4060-a60d-79e37d0eefe6\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-hp2fd" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.920311 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-hp2fd"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.964155 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5frf2\" (UniqueName: \"kubernetes.io/projected/0d5154cd-bccf-4112-a9b5-df0cf8375905-kube-api-access-5frf2\") pod \"octavia-operator-controller-manager-6687f8d877-9n6pj\" (UID: \"0d5154cd-bccf-4112-a9b5-df0cf8375905\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9n6pj" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.973636 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pb5zg" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.985481 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb7wh\" (UniqueName: \"kubernetes.io/projected/6f482e24-1f12-48bd-8944-93b1e7ee2d76-kube-api-access-qb7wh\") pod \"nova-operator-controller-manager-55bff696bd-v6q27\" (UID: \"6f482e24-1f12-48bd-8944-93b1e7ee2d76\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v6q27" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.992498 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-7jlm9"] Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.993475 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-7jlm9" Feb 04 08:55:21 crc kubenswrapper[4644]: I0204 08:55:21.997256 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-hh6mm" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.012001 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.012843 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.018553 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hg7f5" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.020745 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-295n4\" (UniqueName: \"kubernetes.io/projected/dca5895b-8bfa-4060-a60d-79e37d0eefe6-kube-api-access-295n4\") pod \"ovn-operator-controller-manager-788c46999f-hp2fd\" (UID: \"dca5895b-8bfa-4060-a60d-79e37d0eefe6\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-hp2fd" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.020808 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vrrd\" (UniqueName: \"kubernetes.io/projected/d92e25ae-9963-4073-9b4e-66f4aafff7a6-kube-api-access-2vrrd\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d777fx\" (UID: \"d92e25ae-9963-4073-9b4e-66f4aafff7a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.020829 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d777fx\" (UID: \"d92e25ae-9963-4073-9b4e-66f4aafff7a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.020861 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwvpq\" (UniqueName: \"kubernetes.io/projected/b74f9275-a7ff-4b5f-a6e1-3adff65c8a71-kube-api-access-fwvpq\") pod \"swift-operator-controller-manager-68fc8c869-7jlm9\" (UID: \"b74f9275-a7ff-4b5f-a6e1-3adff65c8a71\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-7jlm9" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.020884 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsl75\" (UniqueName: \"kubernetes.io/projected/bb1b03f9-1c9b-4cfb-9503-4c28a27f2d96-kube-api-access-qsl75\") pod \"telemetry-operator-controller-manager-64b5b76f97-tc45m\" (UID: \"bb1b03f9-1c9b-4cfb-9503-4c28a27f2d96\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m" Feb 04 08:55:22 crc kubenswrapper[4644]: E0204 08:55:22.021401 4644 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 08:55:22 crc kubenswrapper[4644]: E0204 08:55:22.021441 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert podName:d92e25ae-9963-4073-9b4e-66f4aafff7a6 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:22.521427099 +0000 UTC m=+832.561484854 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" (UID: "d92e25ae-9963-4073-9b4e-66f4aafff7a6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.024735 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.025936 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.028289 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fhmdb" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.028385 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-cln6d" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.035194 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xmsgv" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.039808 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.056746 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vrrd\" (UniqueName: \"kubernetes.io/projected/d92e25ae-9963-4073-9b4e-66f4aafff7a6-kube-api-access-2vrrd\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d777fx\" (UID: \"d92e25ae-9963-4073-9b4e-66f4aafff7a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.072408 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-7jlm9"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.074975 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-t5sv7" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.080587 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.086273 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-295n4\" (UniqueName: \"kubernetes.io/projected/dca5895b-8bfa-4060-a60d-79e37d0eefe6-kube-api-access-295n4\") pod \"ovn-operator-controller-manager-788c46999f-hp2fd\" (UID: \"dca5895b-8bfa-4060-a60d-79e37d0eefe6\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-hp2fd" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.089616 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.104859 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xw5rw" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.119078 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.136072 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58scd\" (UniqueName: \"kubernetes.io/projected/e6482c44-8c91-4931-aceb-b18c7418a6c4-kube-api-access-58scd\") pod \"placement-operator-controller-manager-5b964cf4cd-4r2z6\" (UID: \"e6482c44-8c91-4931-aceb-b18c7418a6c4\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.136578 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwvpq\" (UniqueName: \"kubernetes.io/projected/b74f9275-a7ff-4b5f-a6e1-3adff65c8a71-kube-api-access-fwvpq\") pod \"swift-operator-controller-manager-68fc8c869-7jlm9\" (UID: \"b74f9275-a7ff-4b5f-a6e1-3adff65c8a71\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-7jlm9" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.136606 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsl75\" (UniqueName: \"kubernetes.io/projected/bb1b03f9-1c9b-4cfb-9503-4c28a27f2d96-kube-api-access-qsl75\") pod \"telemetry-operator-controller-manager-64b5b76f97-tc45m\" (UID: \"bb1b03f9-1c9b-4cfb-9503-4c28a27f2d96\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.137415 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v6q27" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.185097 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9n6pj" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.188409 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwvpq\" (UniqueName: \"kubernetes.io/projected/b74f9275-a7ff-4b5f-a6e1-3adff65c8a71-kube-api-access-fwvpq\") pod \"swift-operator-controller-manager-68fc8c869-7jlm9\" (UID: \"b74f9275-a7ff-4b5f-a6e1-3adff65c8a71\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-7jlm9" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.198784 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsl75\" (UniqueName: \"kubernetes.io/projected/bb1b03f9-1c9b-4cfb-9503-4c28a27f2d96-kube-api-access-qsl75\") pod \"telemetry-operator-controller-manager-64b5b76f97-tc45m\" (UID: \"bb1b03f9-1c9b-4cfb-9503-4c28a27f2d96\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.198861 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.200409 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-7jlm9" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.206008 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.241009 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58scd\" (UniqueName: \"kubernetes.io/projected/e6482c44-8c91-4931-aceb-b18c7418a6c4-kube-api-access-58scd\") pod \"placement-operator-controller-manager-5b964cf4cd-4r2z6\" (UID: \"e6482c44-8c91-4931-aceb-b18c7418a6c4\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.259751 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert\") pod \"infra-operator-controller-manager-79955696d6-6ldzh\" (UID: \"af50abdc-12fd-4e29-b6ce-804f91e185f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" Feb 04 08:55:22 crc kubenswrapper[4644]: E0204 08:55:22.259965 4644 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 04 08:55:22 crc kubenswrapper[4644]: E0204 08:55:22.260055 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert podName:af50abdc-12fd-4e29-b6ce-804f91e185f5 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:23.260039441 +0000 UTC m=+833.300097196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert") pod "infra-operator-controller-manager-79955696d6-6ldzh" (UID: "af50abdc-12fd-4e29-b6ce-804f91e185f5") : secret "infra-operator-webhook-server-cert" not found Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.268485 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.273818 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-bxr6j" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.276133 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-hp2fd" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.287439 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-8l8s8"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.288430 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-8l8s8" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.299758 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-wr42d" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.303989 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58scd\" (UniqueName: \"kubernetes.io/projected/e6482c44-8c91-4931-aceb-b18c7418a6c4-kube-api-access-58scd\") pod \"placement-operator-controller-manager-5b964cf4cd-4r2z6\" (UID: \"e6482c44-8c91-4931-aceb-b18c7418a6c4\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.323458 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.324752 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-8l8s8"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.361149 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjh8z\" (UniqueName: \"kubernetes.io/projected/277bd37d-6c35-4b57-b7bd-b6bb3f1043fe-kube-api-access-xjh8z\") pod \"watcher-operator-controller-manager-564965969-8l8s8\" (UID: \"277bd37d-6c35-4b57-b7bd-b6bb3f1043fe\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-8l8s8" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.361566 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6xcv\" (UniqueName: \"kubernetes.io/projected/8b00283c-6f66-489b-b929-bbd1a5706b67-kube-api-access-v6xcv\") pod \"test-operator-controller-manager-56f8bfcd9f-9msfm\" (UID: \"8b00283c-6f66-489b-b929-bbd1a5706b67\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.385936 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.387053 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.388916 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-psgrp" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.389175 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.396814 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.424596 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.467117 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjh8z\" (UniqueName: \"kubernetes.io/projected/277bd37d-6c35-4b57-b7bd-b6bb3f1043fe-kube-api-access-xjh8z\") pod \"watcher-operator-controller-manager-564965969-8l8s8\" (UID: \"277bd37d-6c35-4b57-b7bd-b6bb3f1043fe\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-8l8s8" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.467173 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.467285 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6xcv\" (UniqueName: \"kubernetes.io/projected/8b00283c-6f66-489b-b929-bbd1a5706b67-kube-api-access-v6xcv\") pod \"test-operator-controller-manager-56f8bfcd9f-9msfm\" (UID: \"8b00283c-6f66-489b-b929-bbd1a5706b67\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.467357 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljdpl\" (UniqueName: \"kubernetes.io/projected/ddb47eef-c05a-40c3-8d94-dd9187b61267-kube-api-access-ljdpl\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.467380 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.504994 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6xcv\" (UniqueName: \"kubernetes.io/projected/8b00283c-6f66-489b-b929-bbd1a5706b67-kube-api-access-v6xcv\") pod \"test-operator-controller-manager-56f8bfcd9f-9msfm\" (UID: 
\"8b00283c-6f66-489b-b929-bbd1a5706b67\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.511768 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgccb"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.512660 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgccb" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.514196 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgccb"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.526265 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-wn5tz" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.529438 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjh8z\" (UniqueName: \"kubernetes.io/projected/277bd37d-6c35-4b57-b7bd-b6bb3f1043fe-kube-api-access-xjh8z\") pod \"watcher-operator-controller-manager-564965969-8l8s8\" (UID: \"277bd37d-6c35-4b57-b7bd-b6bb3f1043fe\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-8l8s8" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.555613 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.572886 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzcnb\" (UniqueName: \"kubernetes.io/projected/9e6331c7-8b94-4ded-92d0-e9db7bbd45ec-kube-api-access-kzcnb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vgccb\" (UID: \"9e6331c7-8b94-4ded-92d0-e9db7bbd45ec\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgccb" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.572944 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljdpl\" (UniqueName: \"kubernetes.io/projected/ddb47eef-c05a-40c3-8d94-dd9187b61267-kube-api-access-ljdpl\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.572966 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.573007 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.573072 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d777fx\" (UID: \"d92e25ae-9963-4073-9b4e-66f4aafff7a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" Feb 04 08:55:22 crc kubenswrapper[4644]: E0204 08:55:22.573174 4644 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 08:55:22 crc kubenswrapper[4644]: E0204 08:55:22.573209 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert podName:d92e25ae-9963-4073-9b4e-66f4aafff7a6 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:23.573196995 +0000 UTC m=+833.613254750 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" (UID: "d92e25ae-9963-4073-9b4e-66f4aafff7a6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 08:55:22 crc kubenswrapper[4644]: E0204 08:55:22.573616 4644 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 04 08:55:22 crc kubenswrapper[4644]: E0204 08:55:22.573639 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs podName:ddb47eef-c05a-40c3-8d94-dd9187b61267 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:23.073631086 +0000 UTC m=+833.113688841 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs") pod "openstack-operator-controller-manager-69b675f8c4-g2gnp" (UID: "ddb47eef-c05a-40c3-8d94-dd9187b61267") : secret "metrics-server-cert" not found Feb 04 08:55:22 crc kubenswrapper[4644]: E0204 08:55:22.573667 4644 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 04 08:55:22 crc kubenswrapper[4644]: E0204 08:55:22.573684 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs podName:ddb47eef-c05a-40c3-8d94-dd9187b61267 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:23.073678348 +0000 UTC m=+833.113736103 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs") pod "openstack-operator-controller-manager-69b675f8c4-g2gnp" (UID: "ddb47eef-c05a-40c3-8d94-dd9187b61267") : secret "webhook-server-cert" not found Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.619114 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-stnhl"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.625291 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljdpl\" (UniqueName: \"kubernetes.io/projected/ddb47eef-c05a-40c3-8d94-dd9187b61267-kube-api-access-ljdpl\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.630072 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-sxbgc"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.671383 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.673862 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzcnb\" (UniqueName: \"kubernetes.io/projected/9e6331c7-8b94-4ded-92d0-e9db7bbd45ec-kube-api-access-kzcnb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vgccb\" (UID: \"9e6331c7-8b94-4ded-92d0-e9db7bbd45ec\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgccb" Feb 04 08:55:22 crc kubenswrapper[4644]: W0204 08:55:22.696835 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod362644b0_399b_4476_b8f7_9723011b9053.slice/crio-8fbf8c2d3f4f10d99dbcc6a4951ade9218776b29e3a296011c2046b277a25843 WatchSource:0}: Error finding container 8fbf8c2d3f4f10d99dbcc6a4951ade9218776b29e3a296011c2046b277a25843: Status 404 returned error can't find the container with id 8fbf8c2d3f4f10d99dbcc6a4951ade9218776b29e3a296011c2046b277a25843 Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.699249 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fhr46"] Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.713238 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-8l8s8" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.728465 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzcnb\" (UniqueName: \"kubernetes.io/projected/9e6331c7-8b94-4ded-92d0-e9db7bbd45ec-kube-api-access-kzcnb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vgccb\" (UID: \"9e6331c7-8b94-4ded-92d0-e9db7bbd45ec\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgccb" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.971578 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgccb" Feb 04 08:55:22 crc kubenswrapper[4644]: I0204 08:55:22.993477 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-hwkc4"] Feb 04 08:55:23 crc kubenswrapper[4644]: W0204 08:55:23.057930 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bb04651_3f3e_4f0a_8822_11279a338e20.slice/crio-f1f5ab8c9d82815673dbb42931c1ea088f5ea3298b3e3667be0519d05175ef62 WatchSource:0}: Error finding container f1f5ab8c9d82815673dbb42931c1ea088f5ea3298b3e3667be0519d05175ef62: Status 404 returned error can't find the container with id f1f5ab8c9d82815673dbb42931c1ea088f5ea3298b3e3667be0519d05175ef62 Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.093939 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.094044 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.094417 4644 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.094471 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs podName:ddb47eef-c05a-40c3-8d94-dd9187b61267 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:24.094454814 +0000 UTC m=+834.134512569 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs") pod "openstack-operator-controller-manager-69b675f8c4-g2gnp" (UID: "ddb47eef-c05a-40c3-8d94-dd9187b61267") : secret "metrics-server-cert" not found Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.094682 4644 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.094897 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs podName:ddb47eef-c05a-40c3-8d94-dd9187b61267 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:24.094701091 +0000 UTC m=+834.134758846 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs") pod "openstack-operator-controller-manager-69b675f8c4-g2gnp" (UID: "ddb47eef-c05a-40c3-8d94-dd9187b61267") : secret "webhook-server-cert" not found Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.297559 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert\") pod \"infra-operator-controller-manager-79955696d6-6ldzh\" (UID: \"af50abdc-12fd-4e29-b6ce-804f91e185f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.297961 4644 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.298029 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert podName:af50abdc-12fd-4e29-b6ce-804f91e185f5 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:25.298014387 +0000 UTC m=+835.338072142 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert") pod "infra-operator-controller-manager-79955696d6-6ldzh" (UID: "af50abdc-12fd-4e29-b6ce-804f91e185f5") : secret "infra-operator-webhook-server-cert" not found Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.414180 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-stnhl" event={"ID":"362644b0-399b-4476-b8f7-9723011b9053","Type":"ContainerStarted","Data":"8fbf8c2d3f4f10d99dbcc6a4951ade9218776b29e3a296011c2046b277a25843"} Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.415401 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fhr46" event={"ID":"86635827-026c-4145-9130-3c300da69963","Type":"ContainerStarted","Data":"fc2ff63a38dda6cc7666933f2154e1060430e5ac7e029ea0c7ccd08b6db61505"} Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.416936 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hwkc4" event={"ID":"3bb04651-3f3e-4f0a-8822-11279a338e20","Type":"ContainerStarted","Data":"f1f5ab8c9d82815673dbb42931c1ea088f5ea3298b3e3667be0519d05175ef62"} Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.418885 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sxbgc" event={"ID":"65e46d7b-9b3f-447b-91da-35322d406623","Type":"ContainerStarted","Data":"4b3c73141c9131fe3a9ea7a2ec482736d35585cd6e77d45806a71f9970a87235"} Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.540674 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pb5zg"] Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.556135 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-t5sv7"] Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.573965 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-xw5rw"] Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.612409 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-g9w8f"] Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.642910 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d777fx\" (UID: \"d92e25ae-9963-4073-9b4e-66f4aafff7a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.643423 4644 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.643477 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert podName:d92e25ae-9963-4073-9b4e-66f4aafff7a6 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:25.643460994 +0000 UTC m=+835.683518739 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" (UID: "d92e25ae-9963-4073-9b4e-66f4aafff7a6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.684999 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-cln6d"] Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.697692 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-xmsgv"] Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.706274 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wmtlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-6mv9v_openstack-operators(1126de8e-d0ae-4d0d-a7d3-cad73f6cc672): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.708554 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v" podUID="1126de8e-d0ae-4d0d-a7d3-cad73f6cc672" Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.709526 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v6xcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-9msfm_openstack-operators(8b00283c-6f66-489b-b929-bbd1a5706b67): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.710859 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm" podUID="8b00283c-6f66-489b-b929-bbd1a5706b67" Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.712434 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qsl75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-tc45m_openstack-operators(bb1b03f9-1c9b-4cfb-9503-4c28a27f2d96): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.713941 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m" podUID="bb1b03f9-1c9b-4cfb-9503-4c28a27f2d96" Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.722195 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-9n6pj"] Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.739767 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-v6q27"] Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.752192 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm"] Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.758552 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-hp2fd"] Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.764516 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-7jlm9"] Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.769448 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m"] Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.773592 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v"] Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.810223 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-8l8s8"] Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.817457 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6"] Feb 04 08:55:23 crc kubenswrapper[4644]: W0204 08:55:23.827496 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod277bd37d_6c35_4b57_b7bd_b6bb3f1043fe.slice/crio-2129c92d6059d1382ddb035088b2eabb561e806cdfd931e25bdd5adb62b02958 WatchSource:0}: Error finding container 2129c92d6059d1382ddb035088b2eabb561e806cdfd931e25bdd5adb62b02958: Status 404 returned error can't find the container 
with id 2129c92d6059d1382ddb035088b2eabb561e806cdfd931e25bdd5adb62b02958 Feb 04 08:55:23 crc kubenswrapper[4644]: W0204 08:55:23.830700 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6482c44_8c91_4931_aceb_b18c7418a6c4.slice/crio-b209aaf9462d7ee326b5f31839bc639650e63614ece537ad50663e61544c75e4 WatchSource:0}: Error finding container b209aaf9462d7ee326b5f31839bc639650e63614ece537ad50663e61544c75e4: Status 404 returned error can't find the container with id b209aaf9462d7ee326b5f31839bc639650e63614ece537ad50663e61544c75e4 Feb 04 08:55:23 crc kubenswrapper[4644]: I0204 08:55:23.830827 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgccb"] Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.834038 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58scd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-4r2z6_openstack-operators(e6482c44-8c91-4931-aceb-b18c7418a6c4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.835302 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull 
QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6" podUID="e6482c44-8c91-4931-aceb-b18c7418a6c4" Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.843877 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzcnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vgccb_openstack-operators(9e6331c7-8b94-4ded-92d0-e9db7bbd45ec): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 04 08:55:23 crc kubenswrapper[4644]: E0204 08:55:23.846074 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgccb" podUID="9e6331c7-8b94-4ded-92d0-e9db7bbd45ec" Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.149170 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.149242 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 
08:55:24 crc kubenswrapper[4644]: E0204 08:55:24.149408 4644 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 04 08:55:24 crc kubenswrapper[4644]: E0204 08:55:24.149466 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs podName:ddb47eef-c05a-40c3-8d94-dd9187b61267 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:26.149447446 +0000 UTC m=+836.189505241 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs") pod "openstack-operator-controller-manager-69b675f8c4-g2gnp" (UID: "ddb47eef-c05a-40c3-8d94-dd9187b61267") : secret "webhook-server-cert" not found Feb 04 08:55:24 crc kubenswrapper[4644]: E0204 08:55:24.149582 4644 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 04 08:55:24 crc kubenswrapper[4644]: E0204 08:55:24.149674 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs podName:ddb47eef-c05a-40c3-8d94-dd9187b61267 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:26.149650102 +0000 UTC m=+836.189707857 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs") pod "openstack-operator-controller-manager-69b675f8c4-g2gnp" (UID: "ddb47eef-c05a-40c3-8d94-dd9187b61267") : secret "metrics-server-cert" not found Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.430702 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-hp2fd" event={"ID":"dca5895b-8bfa-4060-a60d-79e37d0eefe6","Type":"ContainerStarted","Data":"f1cca79c21d05a76ceb1c4098ad0f15cb6172b42b4a51a3d751f1a723ff1326e"} Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.434204 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xw5rw" event={"ID":"08ce9496-06f2-4a40-aac7-eaddbc4eb617","Type":"ContainerStarted","Data":"c84c0cc89a4780560bf486cf169adfd669b44254a7d55059d82f355a6c039515"} Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.436603 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m" event={"ID":"bb1b03f9-1c9b-4cfb-9503-4c28a27f2d96","Type":"ContainerStarted","Data":"e0688722ba7a1d057fde6677732f973a9b6a4d4471a785be2ff3d108c31fc4be"} Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.442374 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xmsgv" event={"ID":"e9033b55-edfc-440d-bd2c-fa027d27f034","Type":"ContainerStarted","Data":"916df31608f7712a3781bd7cecd5c05ab19ffd3aadafa27e7d5a9d8a54ee32dc"} Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.443601 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v6q27" event={"ID":"6f482e24-1f12-48bd-8944-93b1e7ee2d76","Type":"ContainerStarted","Data":"e5612779e3db937d8c654e16fe3adad2fad47e4db850f290c69dfe91aed28a36"} Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.446499 4644 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9n6pj" event={"ID":"0d5154cd-bccf-4112-a9b5-df0cf8375905","Type":"ContainerStarted","Data":"b334814dad979a9c76fcc83ed74059165b6a3b2d1b51f26e456cf9a2debc87b5"} Feb 04 08:55:24 crc kubenswrapper[4644]: E0204 08:55:24.447404 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m" podUID="bb1b03f9-1c9b-4cfb-9503-4c28a27f2d96" Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.449400 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-t5sv7" event={"ID":"f1aab4ac-082c-4c69-94c8-6291514178b7","Type":"ContainerStarted","Data":"a3cd4bbc996c5ecd461995aeac0f477606035c90ac4bf3bc90cc27b379e37370"} Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.455647 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-cln6d" event={"ID":"b449c147-de4b-4503-b680-86e2a43715e2","Type":"ContainerStarted","Data":"282d2523c98a9480e03b4ac8491a543c72424274789b31b0221a4233f1402958"} Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.480203 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgccb" event={"ID":"9e6331c7-8b94-4ded-92d0-e9db7bbd45ec","Type":"ContainerStarted","Data":"f7bb6359810a7e8fb201e285b90cdcf133460c70de36d8643fdb5a86a6b4ac71"} Feb 04 08:55:24 crc kubenswrapper[4644]: E0204 08:55:24.481991 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgccb" podUID="9e6331c7-8b94-4ded-92d0-e9db7bbd45ec" Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.493972 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v" event={"ID":"1126de8e-d0ae-4d0d-a7d3-cad73f6cc672","Type":"ContainerStarted","Data":"97e4f990bb0e5d37515780545fd48cb9acdb7c0a05751edd777809071710a549"} Feb 04 08:55:24 crc kubenswrapper[4644]: E0204 08:55:24.497162 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v" podUID="1126de8e-d0ae-4d0d-a7d3-cad73f6cc672" Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.503789 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm" event={"ID":"8b00283c-6f66-489b-b929-bbd1a5706b67","Type":"ContainerStarted","Data":"9a8f0b2df9589c3b93ffb2c2ce17751ae517697c1e013dfb329db64c55abd85e"} Feb 04 08:55:24 crc kubenswrapper[4644]: E0204 08:55:24.506888 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm" podUID="8b00283c-6f66-489b-b929-bbd1a5706b67" Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.518928 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-8l8s8" event={"ID":"277bd37d-6c35-4b57-b7bd-b6bb3f1043fe","Type":"ContainerStarted","Data":"2129c92d6059d1382ddb035088b2eabb561e806cdfd931e25bdd5adb62b02958"} Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.521255 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pb5zg" event={"ID":"b3816529-aae3-447c-b497-027d78669856","Type":"ContainerStarted","Data":"38ac3a31239ad6d4d0cf111d14c82b21e0d021f23c911f6b607280a1a197b9b8"} Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.525775 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-7jlm9" event={"ID":"b74f9275-a7ff-4b5f-a6e1-3adff65c8a71","Type":"ContainerStarted","Data":"8401f05d3e40fbc70d74a5251793e00a19622c19bf206b9610d7122a837ee57c"} Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.527812 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g9w8f" event={"ID":"718025b3-0dfa-4c50-a020-8fc030f6061c","Type":"ContainerStarted","Data":"6b567099136fec4d1d9722fb01fbde00df36451919616146a0bcd7c3a6258238"} Feb 04 08:55:24 crc kubenswrapper[4644]: I0204 08:55:24.537681 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6" event={"ID":"e6482c44-8c91-4931-aceb-b18c7418a6c4","Type":"ContainerStarted","Data":"b209aaf9462d7ee326b5f31839bc639650e63614ece537ad50663e61544c75e4"} Feb 04 08:55:24 crc kubenswrapper[4644]: E0204 08:55:24.539792 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6" podUID="e6482c44-8c91-4931-aceb-b18c7418a6c4" Feb 04 08:55:25 crc kubenswrapper[4644]: I0204 08:55:25.371876 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert\") pod \"infra-operator-controller-manager-79955696d6-6ldzh\" (UID: \"af50abdc-12fd-4e29-b6ce-804f91e185f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" Feb 04 08:55:25 crc kubenswrapper[4644]: E0204 08:55:25.372073 4644 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 04 08:55:25 crc kubenswrapper[4644]: E0204 08:55:25.372382 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert podName:af50abdc-12fd-4e29-b6ce-804f91e185f5 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:29.372303552 +0000 UTC m=+839.412361357 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert") pod "infra-operator-controller-manager-79955696d6-6ldzh" (UID: "af50abdc-12fd-4e29-b6ce-804f91e185f5") : secret "infra-operator-webhook-server-cert" not found Feb 04 08:55:25 crc kubenswrapper[4644]: E0204 08:55:25.550379 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgccb" podUID="9e6331c7-8b94-4ded-92d0-e9db7bbd45ec" Feb 04 08:55:25 crc kubenswrapper[4644]: E0204 08:55:25.550618 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm" podUID="8b00283c-6f66-489b-b929-bbd1a5706b67" Feb 04 08:55:25 crc kubenswrapper[4644]: E0204 08:55:25.550654 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6" podUID="e6482c44-8c91-4931-aceb-b18c7418a6c4" Feb 04 08:55:25 crc kubenswrapper[4644]: E0204 08:55:25.550686 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v" podUID="1126de8e-d0ae-4d0d-a7d3-cad73f6cc672" Feb 04 08:55:25 crc kubenswrapper[4644]: E0204 08:55:25.550712 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m" podUID="bb1b03f9-1c9b-4cfb-9503-4c28a27f2d96" Feb 04 08:55:25 crc kubenswrapper[4644]: I0204 08:55:25.677027 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d777fx\" (UID: \"d92e25ae-9963-4073-9b4e-66f4aafff7a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" Feb 04 08:55:25 crc kubenswrapper[4644]: E0204 08:55:25.677903 4644 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 08:55:25 crc kubenswrapper[4644]: E0204 08:55:25.677949 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert 
podName:d92e25ae-9963-4073-9b4e-66f4aafff7a6 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:29.677936369 +0000 UTC m=+839.717994124 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" (UID: "d92e25ae-9963-4073-9b4e-66f4aafff7a6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 08:55:26 crc kubenswrapper[4644]: I0204 08:55:26.184884 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:26 crc kubenswrapper[4644]: I0204 08:55:26.185048 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:26 crc kubenswrapper[4644]: E0204 08:55:26.185214 4644 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 04 08:55:26 crc kubenswrapper[4644]: E0204 08:55:26.185296 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs podName:ddb47eef-c05a-40c3-8d94-dd9187b61267 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:30.185277238 +0000 UTC m=+840.225335003 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs") pod "openstack-operator-controller-manager-69b675f8c4-g2gnp" (UID: "ddb47eef-c05a-40c3-8d94-dd9187b61267") : secret "metrics-server-cert" not found Feb 04 08:55:26 crc kubenswrapper[4644]: E0204 08:55:26.185304 4644 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 04 08:55:26 crc kubenswrapper[4644]: E0204 08:55:26.185347 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs podName:ddb47eef-c05a-40c3-8d94-dd9187b61267 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:30.18533809 +0000 UTC m=+840.225395845 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs") pod "openstack-operator-controller-manager-69b675f8c4-g2gnp" (UID: "ddb47eef-c05a-40c3-8d94-dd9187b61267") : secret "webhook-server-cert" not found Feb 04 08:55:29 crc kubenswrapper[4644]: I0204 08:55:29.429985 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert\") pod \"infra-operator-controller-manager-79955696d6-6ldzh\" (UID: \"af50abdc-12fd-4e29-b6ce-804f91e185f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" Feb 04 08:55:29 crc kubenswrapper[4644]: E0204 08:55:29.430178 4644 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 04 08:55:29 crc kubenswrapper[4644]: E0204 08:55:29.430459 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert podName:af50abdc-12fd-4e29-b6ce-804f91e185f5 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:37.430440107 +0000 UTC m=+847.470497862 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert") pod "infra-operator-controller-manager-79955696d6-6ldzh" (UID: "af50abdc-12fd-4e29-b6ce-804f91e185f5") : secret "infra-operator-webhook-server-cert" not found Feb 04 08:55:29 crc kubenswrapper[4644]: I0204 08:55:29.734966 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d777fx\" (UID: \"d92e25ae-9963-4073-9b4e-66f4aafff7a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" Feb 04 08:55:29 crc kubenswrapper[4644]: E0204 08:55:29.735162 4644 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 08:55:29 crc kubenswrapper[4644]: E0204 08:55:29.735228 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert podName:d92e25ae-9963-4073-9b4e-66f4aafff7a6 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:37.73520876 +0000 UTC m=+847.775266515 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" (UID: "d92e25ae-9963-4073-9b4e-66f4aafff7a6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 08:55:30 crc kubenswrapper[4644]: I0204 08:55:30.241373 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:30 crc kubenswrapper[4644]: I0204 08:55:30.241451 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:30 crc kubenswrapper[4644]: E0204 08:55:30.241602 4644 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 04 08:55:30 crc kubenswrapper[4644]: E0204 08:55:30.241671 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs podName:ddb47eef-c05a-40c3-8d94-dd9187b61267 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:38.241653924 +0000 UTC m=+848.281711679 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs") pod "openstack-operator-controller-manager-69b675f8c4-g2gnp" (UID: "ddb47eef-c05a-40c3-8d94-dd9187b61267") : secret "webhook-server-cert" not found Feb 04 08:55:30 crc kubenswrapper[4644]: E0204 08:55:30.241602 4644 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 04 08:55:30 crc kubenswrapper[4644]: E0204 08:55:30.241827 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs podName:ddb47eef-c05a-40c3-8d94-dd9187b61267 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:38.241813359 +0000 UTC m=+848.281871194 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs") pod "openstack-operator-controller-manager-69b675f8c4-g2gnp" (UID: "ddb47eef-c05a-40c3-8d94-dd9187b61267") : secret "metrics-server-cert" not found Feb 04 08:55:36 crc kubenswrapper[4644]: E0204 08:55:36.508907 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Feb 04 08:55:36 crc kubenswrapper[4644]: E0204 08:55:36.510888 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjh8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-8l8s8_openstack-operators(277bd37d-6c35-4b57-b7bd-b6bb3f1043fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:55:36 crc kubenswrapper[4644]: E0204 08:55:36.512385 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-8l8s8" 
podUID="277bd37d-6c35-4b57-b7bd-b6bb3f1043fe" Feb 04 08:55:36 crc kubenswrapper[4644]: E0204 08:55:36.626358 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-8l8s8" podUID="277bd37d-6c35-4b57-b7bd-b6bb3f1043fe" Feb 04 08:55:37 crc kubenswrapper[4644]: I0204 08:55:37.450796 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert\") pod \"infra-operator-controller-manager-79955696d6-6ldzh\" (UID: \"af50abdc-12fd-4e29-b6ce-804f91e185f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" Feb 04 08:55:37 crc kubenswrapper[4644]: E0204 08:55:37.451011 4644 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 04 08:55:37 crc kubenswrapper[4644]: E0204 08:55:37.451104 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert podName:af50abdc-12fd-4e29-b6ce-804f91e185f5 nodeName:}" failed. No retries permitted until 2026-02-04 08:55:53.451081184 +0000 UTC m=+863.491139009 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert") pod "infra-operator-controller-manager-79955696d6-6ldzh" (UID: "af50abdc-12fd-4e29-b6ce-804f91e185f5") : secret "infra-operator-webhook-server-cert" not found Feb 04 08:55:37 crc kubenswrapper[4644]: I0204 08:55:37.754934 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d777fx\" (UID: \"d92e25ae-9963-4073-9b4e-66f4aafff7a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" Feb 04 08:55:37 crc kubenswrapper[4644]: I0204 08:55:37.760966 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d92e25ae-9963-4073-9b4e-66f4aafff7a6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d777fx\" (UID: \"d92e25ae-9963-4073-9b4e-66f4aafff7a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" Feb 04 08:55:37 crc kubenswrapper[4644]: I0204 08:55:37.888931 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xphph" Feb 04 08:55:37 crc kubenswrapper[4644]: I0204 08:55:37.896662 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" Feb 04 08:55:38 crc kubenswrapper[4644]: I0204 08:55:38.262256 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:38 crc kubenswrapper[4644]: I0204 08:55:38.262420 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:38 crc kubenswrapper[4644]: I0204 08:55:38.287354 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-metrics-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:38 crc kubenswrapper[4644]: I0204 08:55:38.287470 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ddb47eef-c05a-40c3-8d94-dd9187b61267-webhook-certs\") pod \"openstack-operator-controller-manager-69b675f8c4-g2gnp\" (UID: \"ddb47eef-c05a-40c3-8d94-dd9187b61267\") " pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:38 crc kubenswrapper[4644]: I0204 08:55:38.498574 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-psgrp" Feb 04 08:55:38 crc kubenswrapper[4644]: I0204 08:55:38.507347 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:38 crc kubenswrapper[4644]: E0204 08:55:38.717112 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Feb 04 08:55:38 crc kubenswrapper[4644]: E0204 08:55:38.717355 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s9f55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-xw5rw_openstack-operators(08ce9496-06f2-4a40-aac7-eaddbc4eb617): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:55:38 crc kubenswrapper[4644]: E0204 08:55:38.719068 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xw5rw" podUID="08ce9496-06f2-4a40-aac7-eaddbc4eb617" Feb 04 08:55:39 crc kubenswrapper[4644]: E0204 08:55:39.648545 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xw5rw" podUID="08ce9496-06f2-4a40-aac7-eaddbc4eb617" Feb 04 08:55:39 crc kubenswrapper[4644]: E0204 08:55:39.977036 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898" Feb 04 08:55:39 crc kubenswrapper[4644]: E0204 08:55:39.977878 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qj28j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-8d874c8fc-sxbgc_openstack-operators(65e46d7b-9b3f-447b-91da-35322d406623): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:55:39 crc kubenswrapper[4644]: E0204 08:55:39.979121 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sxbgc" podUID="65e46d7b-9b3f-447b-91da-35322d406623" Feb 04 08:55:40 crc kubenswrapper[4644]: E0204 08:55:40.655513 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sxbgc" podUID="65e46d7b-9b3f-447b-91da-35322d406623" Feb 04 08:55:41 crc kubenswrapper[4644]: E0204 08:55:41.764244 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Feb 04 08:55:41 crc kubenswrapper[4644]: E0204 08:55:41.764717 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9sh2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-t5sv7_openstack-operators(f1aab4ac-082c-4c69-94c8-6291514178b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:55:41 crc kubenswrapper[4644]: E0204 
08:55:41.767166 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-t5sv7" podUID="f1aab4ac-082c-4c69-94c8-6291514178b7" Feb 04 08:55:42 crc kubenswrapper[4644]: E0204 08:55:42.444053 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Feb 04 08:55:42 crc kubenswrapper[4644]: E0204 08:55:42.444214 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jvv68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-g9w8f_openstack-operators(718025b3-0dfa-4c50-a020-8fc030f6061c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:55:42 crc kubenswrapper[4644]: E0204 08:55:42.445614 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g9w8f" podUID="718025b3-0dfa-4c50-a020-8fc030f6061c" Feb 04 08:55:42 crc kubenswrapper[4644]: E0204 08:55:42.665547 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g9w8f" podUID="718025b3-0dfa-4c50-a020-8fc030f6061c" Feb 04 08:55:42 crc kubenswrapper[4644]: E0204 08:55:42.668875 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-t5sv7" podUID="f1aab4ac-082c-4c69-94c8-6291514178b7" Feb 04 08:55:43 crc kubenswrapper[4644]: E0204 08:55:43.042441 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Feb 04 08:55:43 crc kubenswrapper[4644]: E0204 08:55:43.042610 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-26dcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-cln6d_openstack-operators(b449c147-de4b-4503-b680-86e2a43715e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:55:43 crc kubenswrapper[4644]: E0204 08:55:43.045705 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-cln6d" podUID="b449c147-de4b-4503-b680-86e2a43715e2" Feb 04 08:55:43 crc kubenswrapper[4644]: E0204 08:55:43.672188 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-cln6d" podUID="b449c147-de4b-4503-b680-86e2a43715e2" Feb 04 08:55:45 crc kubenswrapper[4644]: E0204 08:55:45.367643 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Feb 04 08:55:45 crc kubenswrapper[4644]: E0204 08:55:45.368181 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5frf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-9n6pj_openstack-operators(0d5154cd-bccf-4112-a9b5-df0cf8375905): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:55:45 crc kubenswrapper[4644]: E0204 08:55:45.369373 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9n6pj" podUID="0d5154cd-bccf-4112-a9b5-df0cf8375905" Feb 04 08:55:45 crc kubenswrapper[4644]: E0204 08:55:45.686703 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9n6pj" podUID="0d5154cd-bccf-4112-a9b5-df0cf8375905" Feb 04 08:55:45 crc kubenswrapper[4644]: E0204 08:55:45.975369 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Feb 04 08:55:45 crc kubenswrapper[4644]: E0204 08:55:45.975573 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q7svs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-xmsgv_openstack-operators(e9033b55-edfc-440d-bd2c-fa027d27f034): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:55:45 crc kubenswrapper[4644]: E0204 08:55:45.976964 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xmsgv" podUID="e9033b55-edfc-440d-bd2c-fa027d27f034" Feb 04 08:55:46 crc kubenswrapper[4644]: E0204 08:55:46.560045 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Feb 04 08:55:46 crc kubenswrapper[4644]: E0204 08:55:46.560261 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qb7wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-v6q27_openstack-operators(6f482e24-1f12-48bd-8944-93b1e7ee2d76): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:55:46 crc kubenswrapper[4644]: E0204 08:55:46.561488 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v6q27" podUID="6f482e24-1f12-48bd-8944-93b1e7ee2d76" Feb 04 08:55:46 crc kubenswrapper[4644]: E0204 08:55:46.690726 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xmsgv" podUID="e9033b55-edfc-440d-bd2c-fa027d27f034" Feb 04 08:55:46 crc kubenswrapper[4644]: E0204 08:55:46.690795 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v6q27" podUID="6f482e24-1f12-48bd-8944-93b1e7ee2d76" Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.261795 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx"] Feb 04 08:55:53 crc kubenswrapper[4644]: W0204 08:55:53.282920 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd92e25ae_9963_4073_9b4e_66f4aafff7a6.slice/crio-760df4f2cd457863ff77844d39081a68d26717affa53f4e0ea6a6032ebc127ad WatchSource:0}: Error 
finding container 760df4f2cd457863ff77844d39081a68d26717affa53f4e0ea6a6032ebc127ad: Status 404 returned error can't find the container with id 760df4f2cd457863ff77844d39081a68d26717affa53f4e0ea6a6032ebc127ad
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.389868 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp"]
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.507298 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert\") pod \"infra-operator-controller-manager-79955696d6-6ldzh\" (UID: \"af50abdc-12fd-4e29-b6ce-804f91e185f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.514154 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af50abdc-12fd-4e29-b6ce-804f91e185f5-cert\") pod \"infra-operator-controller-manager-79955696d6-6ldzh\" (UID: \"af50abdc-12fd-4e29-b6ce-804f91e185f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.730394 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-tnqcs"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.734257 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fhr46" event={"ID":"86635827-026c-4145-9130-3c300da69963","Type":"ContainerStarted","Data":"da256a5c7738bf8155037f59e10ae4af438c46e2a1eb5eb10d20bcee51ffe024"}
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.735220 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fhr46"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.737074 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m" event={"ID":"bb1b03f9-1c9b-4cfb-9503-4c28a27f2d96","Type":"ContainerStarted","Data":"22ab72ea3f42230a481b711cc584854d3a870b0652854b40b41a15ddebe01895"}
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.737600 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.739476 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-stnhl" event={"ID":"362644b0-399b-4476-b8f7-9723011b9053","Type":"ContainerStarted","Data":"fbf5af364ac5064ef61ec78c33b34762c5fe1b066f05cd2c1b2542847c7f6de1"}
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.740005 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-stnhl"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.740943 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.742272 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-7jlm9" event={"ID":"b74f9275-a7ff-4b5f-a6e1-3adff65c8a71","Type":"ContainerStarted","Data":"88b09e47403112460db1826d61346a4d4d790aa5717d3e9a2aa62c44f7e1029f"}
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.742656 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-7jlm9"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.743944 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6" event={"ID":"e6482c44-8c91-4931-aceb-b18c7418a6c4","Type":"ContainerStarted","Data":"bdad00838e6f9923ce928eab970299800d034b26c0913bd97220bc2218a83626"}
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.744374 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.745857 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm" event={"ID":"8b00283c-6f66-489b-b929-bbd1a5706b67","Type":"ContainerStarted","Data":"c9d35a0eff2bc342bcc593c2f2de9ba82dbbebdbd564ded0c4a8bc5dd7b60d92"}
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.746354 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.751367 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pb5zg" event={"ID":"b3816529-aae3-447c-b497-027d78669856","Type":"ContainerStarted","Data":"8549acce0f3f7ac3816c7d099a7c0925b4ca48b4d24c82f79e5fc2a1b668e145"}
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.751946 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pb5zg"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.753685 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" event={"ID":"ddb47eef-c05a-40c3-8d94-dd9187b61267","Type":"ContainerStarted","Data":"92023929d52584d19b9bd5537dabacb362097f7ff5065ee6040fa1106db62484"}
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.753714 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" event={"ID":"ddb47eef-c05a-40c3-8d94-dd9187b61267","Type":"ContainerStarted","Data":"4560be0b4a773586424d1807b6c2561621bc553f96309459b1ba3c024149fc84"}
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.754180 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.755402 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-hp2fd" event={"ID":"dca5895b-8bfa-4060-a60d-79e37d0eefe6","Type":"ContainerStarted","Data":"c65b9a0c173a890405427236a92b0b1e8f8115ad61f8173ad5cb0bac49996264"}
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.755836 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-hp2fd"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.756810 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" event={"ID":"d92e25ae-9963-4073-9b4e-66f4aafff7a6","Type":"ContainerStarted","Data":"760df4f2cd457863ff77844d39081a68d26717affa53f4e0ea6a6032ebc127ad"}
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.757908 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hwkc4" event={"ID":"3bb04651-3f3e-4f0a-8822-11279a338e20","Type":"ContainerStarted","Data":"25b1619466e261b1fd30db2ece3f195145ab49064220e40af37d9a1a37dc375d"}
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.758441 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hwkc4"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.759498 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgccb" event={"ID":"9e6331c7-8b94-4ded-92d0-e9db7bbd45ec","Type":"ContainerStarted","Data":"404f7048d1babab0d0461b7b234dc3afd8ed3f27ba3d1349e39609955cff4569"}
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.761066 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v" event={"ID":"1126de8e-d0ae-4d0d-a7d3-cad73f6cc672","Type":"ContainerStarted","Data":"758a6332b5b70cded724919b659d97b12fdea8691000ca10609c6e343ee3ef25"}
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.761670 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.763090 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-8l8s8" event={"ID":"277bd37d-6c35-4b57-b7bd-b6bb3f1043fe","Type":"ContainerStarted","Data":"0a9733c6e4b8fafcc070b683c111de4483f166146116fb80c22e4ace0b138ce7"}
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.763719 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-8l8s8"
Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.851190 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hwkc4" podStartSLOduration=9.959109277 podStartE2EDuration="32.851170375s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.062667045 +0000 UTC m=+833.102724800" lastFinishedPulling="2026-02-04 08:55:45.954728123 +0000 UTC m=+855.994785898" observedRunningTime="2026-02-04 08:55:53.849483038 +0000 UTC m=+863.889540793" watchObservedRunningTime="2026-02-04 08:55:53.851170375 +0000 UTC m=+863.891228130"
pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fhr46" podStartSLOduration=8.209682787 podStartE2EDuration="32.855665238s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:22.822275983 +0000 UTC m=+832.862333748" lastFinishedPulling="2026-02-04 08:55:47.468258444 +0000 UTC m=+857.508316199" observedRunningTime="2026-02-04 08:55:53.814617764 +0000 UTC m=+863.854675529" watchObservedRunningTime="2026-02-04 08:55:53.855665238 +0000 UTC m=+863.895722983" Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.957385 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6" podStartSLOduration=3.9080282029999998 podStartE2EDuration="32.957366863s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.833918178 +0000 UTC m=+833.873975933" lastFinishedPulling="2026-02-04 08:55:52.883256838 +0000 UTC m=+862.923314593" observedRunningTime="2026-02-04 08:55:53.890551963 +0000 UTC m=+863.930609718" watchObservedRunningTime="2026-02-04 08:55:53.957366863 +0000 UTC m=+863.997424618" Feb 04 08:55:53 crc kubenswrapper[4644]: I0204 08:55:53.958498 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m" podStartSLOduration=3.7051479389999997 podStartE2EDuration="32.958493834s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.71235796 +0000 UTC m=+833.752415715" lastFinishedPulling="2026-02-04 08:55:52.965703855 +0000 UTC m=+863.005761610" observedRunningTime="2026-02-04 08:55:53.94772863 +0000 UTC m=+863.987786385" watchObservedRunningTime="2026-02-04 08:55:53.958493834 +0000 UTC m=+863.998551589" Feb 04 08:55:54 crc kubenswrapper[4644]: I0204 08:55:54.032857 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-7jlm9" podStartSLOduration=9.269381869 podStartE2EDuration="33.03283945s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.705358418 +0000 UTC m=+833.745416173" lastFinishedPulling="2026-02-04 08:55:47.468815999 +0000 UTC m=+857.508873754" observedRunningTime="2026-02-04 08:55:53.992564858 +0000 UTC m=+864.032622623" watchObservedRunningTime="2026-02-04 08:55:54.03283945 +0000 UTC m=+864.072897205" Feb 04 08:55:54 crc kubenswrapper[4644]: I0204 08:55:54.053751 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pb5zg" podStartSLOduration=9.149747355 podStartE2EDuration="33.053731112s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.56843723 +0000 UTC m=+833.608494985" lastFinishedPulling="2026-02-04 08:55:47.472420987 +0000 UTC m=+857.512478742" observedRunningTime="2026-02-04 08:55:54.053028593 +0000 UTC m=+864.093086348" watchObservedRunningTime="2026-02-04 08:55:54.053731112 +0000 UTC m=+864.093788867" Feb 04 08:55:54 crc kubenswrapper[4644]: I0204 08:55:54.060638 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-stnhl" podStartSLOduration=9.839706909 podStartE2EDuration="33.060618491s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:22.733814091 +0000 UTC 
m=+832.773871856" lastFinishedPulling="2026-02-04 08:55:45.954725683 +0000 UTC m=+855.994783438" observedRunningTime="2026-02-04 08:55:54.038031883 +0000 UTC m=+864.078089638" watchObservedRunningTime="2026-02-04 08:55:54.060618491 +0000 UTC m=+864.100676246" Feb 04 08:55:54 crc kubenswrapper[4644]: I0204 08:55:54.113956 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v" podStartSLOduration=3.970927757 podStartE2EDuration="33.113937971s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.70617044 +0000 UTC m=+833.746228195" lastFinishedPulling="2026-02-04 08:55:52.849180644 +0000 UTC m=+862.889238409" observedRunningTime="2026-02-04 08:55:54.110766164 +0000 UTC m=+864.150823919" watchObservedRunningTime="2026-02-04 08:55:54.113937971 +0000 UTC m=+864.153995726" Feb 04 08:55:54 crc kubenswrapper[4644]: I0204 08:55:54.164262 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vgccb" podStartSLOduration=3.04228639 podStartE2EDuration="32.164245269s" podCreationTimestamp="2026-02-04 08:55:22 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.843682805 +0000 UTC m=+833.883740560" lastFinishedPulling="2026-02-04 08:55:52.965641694 +0000 UTC m=+863.005699439" observedRunningTime="2026-02-04 08:55:54.161574296 +0000 UTC m=+864.201632041" watchObservedRunningTime="2026-02-04 08:55:54.164245269 +0000 UTC m=+864.204303024" Feb 04 08:55:54 crc kubenswrapper[4644]: I0204 08:55:54.186521 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm" podStartSLOduration=4.035894377 podStartE2EDuration="33.186507339s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.708517625 +0000 UTC m=+833.748575370" lastFinishedPulling="2026-02-04 08:55:52.859130567 +0000 UTC m=+862.899188332" observedRunningTime="2026-02-04 08:55:54.183669291 +0000 UTC m=+864.223727046" watchObservedRunningTime="2026-02-04 08:55:54.186507339 +0000 UTC m=+864.226565094" Feb 04 08:55:54 crc kubenswrapper[4644]: I0204 08:55:54.223556 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" podStartSLOduration=32.223540643 podStartE2EDuration="32.223540643s" podCreationTimestamp="2026-02-04 08:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:55:54.218881115 +0000 UTC m=+864.258938870" watchObservedRunningTime="2026-02-04 08:55:54.223540643 +0000 UTC m=+864.263598398" Feb 04 08:55:54 crc kubenswrapper[4644]: I0204 08:55:54.252651 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-hp2fd" podStartSLOduration=9.488296535 podStartE2EDuration="33.252629069s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.704748282 +0000 UTC m=+833.744806037" lastFinishedPulling="2026-02-04 08:55:47.469080816 +0000 UTC m=+857.509138571" observedRunningTime="2026-02-04 08:55:54.245158435 +0000 UTC m=+864.285216190" watchObservedRunningTime="2026-02-04 08:55:54.252629069 +0000 UTC m=+864.292686824" Feb 04 08:55:54 crc kubenswrapper[4644]: I0204 08:55:54.268267 4644 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-8l8s8" podStartSLOduration=4.156572439 podStartE2EDuration="33.268250607s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.833675531 +0000 UTC m=+833.873733286" lastFinishedPulling="2026-02-04 08:55:52.945353689 +0000 UTC m=+862.985411454" observedRunningTime="2026-02-04 08:55:54.264224587 +0000 UTC m=+864.304282342" watchObservedRunningTime="2026-02-04 08:55:54.268250607 +0000 UTC m=+864.308308362" Feb 04 08:55:54 crc kubenswrapper[4644]: I0204 08:55:54.596476 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh"] Feb 04 08:55:54 crc kubenswrapper[4644]: I0204 08:55:54.775028 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" event={"ID":"af50abdc-12fd-4e29-b6ce-804f91e185f5","Type":"ContainerStarted","Data":"43b25e51bcd3a5a95ec2bf17efa3abec8e4e85ca475741cceca1b804c519e132"} Feb 04 08:55:56 crc kubenswrapper[4644]: I0204 08:55:56.802601 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xw5rw" event={"ID":"08ce9496-06f2-4a40-aac7-eaddbc4eb617","Type":"ContainerStarted","Data":"ba09d66234c99e0b3acd4569ec719043497dc4f876e80009aa77fb5727cf4521"} Feb 04 08:55:56 crc kubenswrapper[4644]: I0204 08:55:56.803374 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xw5rw" Feb 04 08:55:56 crc kubenswrapper[4644]: I0204 08:55:56.820509 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xw5rw" podStartSLOduration=3.7306139270000003 podStartE2EDuration="35.820471832s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.645749507 +0000 UTC m=+833.685807262" lastFinishedPulling="2026-02-04 08:55:55.735607412 +0000 UTC m=+865.775665167" observedRunningTime="2026-02-04 08:55:56.820319468 +0000 UTC m=+866.860377223" watchObservedRunningTime="2026-02-04 08:55:56.820471832 +0000 UTC m=+866.860529587" Feb 04 08:55:58 crc kubenswrapper[4644]: I0204 08:55:58.515782 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-69b675f8c4-g2gnp" Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.828134 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-t5sv7" event={"ID":"f1aab4ac-082c-4c69-94c8-6291514178b7","Type":"ContainerStarted","Data":"e93f0e453e621442fcdc2df3553fd1b0c972536045b666ee631e0fc3f18bdbe2"} Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.828903 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-t5sv7" Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.830112 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g9w8f" event={"ID":"718025b3-0dfa-4c50-a020-8fc030f6061c","Type":"ContainerStarted","Data":"895019e1728cd7984af32993b782d67333817823bb9c561acf53d7b4f5ad0678"} Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.830281 4644 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g9w8f" Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.832454 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" event={"ID":"d92e25ae-9963-4073-9b4e-66f4aafff7a6","Type":"ContainerStarted","Data":"659ba7a02f41cbd9ad7cc42bb8121574d72e9e4faf3f33d44baba8950e093a49"} Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.832793 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.834071 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" event={"ID":"af50abdc-12fd-4e29-b6ce-804f91e185f5","Type":"ContainerStarted","Data":"9bceec57bf6142d21d6f90d170791ae461ca3e7ba4bf193981a1c7407eaffb33"} Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.834164 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.835103 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-cln6d" event={"ID":"b449c147-de4b-4503-b680-86e2a43715e2","Type":"ContainerStarted","Data":"f2986e3b6ab4a6c9ec23bcbec05a6f2dce443c6426a1d72de3e0a44066485edf"} Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.835306 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-cln6d" Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.836504 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9n6pj" event={"ID":"0d5154cd-bccf-4112-a9b5-df0cf8375905","Type":"ContainerStarted","Data":"91c701fe452e5cb4d98c7ac85ae437a937a1367e62337f9922228ceba69c1b4c"} Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.836700 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9n6pj" Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.837634 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sxbgc" event={"ID":"65e46d7b-9b3f-447b-91da-35322d406623","Type":"ContainerStarted","Data":"a343795db67f4a9754998fcfa3722486d93d49bbd26cd75e0f1b389ade76a7fd"} Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.837801 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sxbgc" Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.865039 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-t5sv7" podStartSLOduration=3.597029349 podStartE2EDuration="38.86502233s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.651925186 +0000 UTC m=+833.691982941" lastFinishedPulling="2026-02-04 08:55:58.919918167 +0000 UTC m=+868.959975922" observedRunningTime="2026-02-04 08:55:59.862024968 +0000 UTC m=+869.902082723" 
watchObservedRunningTime="2026-02-04 08:55:59.86502233 +0000 UTC m=+869.905080085" Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.933036 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" podStartSLOduration=33.298277479 podStartE2EDuration="38.933020232s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:53.285167874 +0000 UTC m=+863.325225629" lastFinishedPulling="2026-02-04 08:55:58.919910627 +0000 UTC m=+868.959968382" observedRunningTime="2026-02-04 08:55:59.92710663 +0000 UTC m=+869.967164385" watchObservedRunningTime="2026-02-04 08:55:59.933020232 +0000 UTC m=+869.973077987" Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.948243 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9n6pj" podStartSLOduration=3.685743768 podStartE2EDuration="38.948226349s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.657460857 +0000 UTC m=+833.697518612" lastFinishedPulling="2026-02-04 08:55:58.919943438 +0000 UTC m=+868.960001193" observedRunningTime="2026-02-04 08:55:59.941121684 +0000 UTC m=+869.981179439" watchObservedRunningTime="2026-02-04 08:55:59.948226349 +0000 UTC m=+869.988284104" Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.969319 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g9w8f" podStartSLOduration=3.700806901 podStartE2EDuration="38.969302446s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.6513525 +0000 UTC m=+833.691410255" lastFinishedPulling="2026-02-04 08:55:58.919848035 +0000 UTC m=+868.959905800" observedRunningTime="2026-02-04 08:55:59.967698781 +0000 UTC m=+870.007756536" watchObservedRunningTime="2026-02-04 08:55:59.969302446 +0000 UTC m=+870.009360201" Feb 04 08:55:59 crc kubenswrapper[4644]: I0204 08:55:59.994573 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-cln6d" podStartSLOduration=3.728419416 podStartE2EDuration="38.994559227s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.653761076 +0000 UTC m=+833.693818831" lastFinishedPulling="2026-02-04 08:55:58.919900887 +0000 UTC m=+868.959958642" observedRunningTime="2026-02-04 08:55:59.991283428 +0000 UTC m=+870.031341183" watchObservedRunningTime="2026-02-04 08:55:59.994559227 +0000 UTC m=+870.034616982" Feb 04 08:56:00 crc kubenswrapper[4644]: I0204 08:56:00.011961 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sxbgc" podStartSLOduration=2.966452187 podStartE2EDuration="39.011941543s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:22.794989686 +0000 UTC m=+832.835047441" lastFinishedPulling="2026-02-04 08:55:58.840479052 +0000 UTC m=+868.880536797" observedRunningTime="2026-02-04 08:56:00.009541587 +0000 UTC m=+870.049599332" watchObservedRunningTime="2026-02-04 08:56:00.011941543 +0000 UTC m=+870.051999308" Feb 04 08:56:00 crc kubenswrapper[4644]: I0204 08:56:00.039171 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" podStartSLOduration=34.735940281 podStartE2EDuration="39.039149849s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:54.632310207 +0000 UTC m=+864.672367962" lastFinishedPulling="2026-02-04 08:55:58.935519775 +0000 UTC m=+868.975577530" observedRunningTime="2026-02-04 08:56:00.030622395 +0000 UTC m=+870.070680150" watchObservedRunningTime="2026-02-04 08:56:00.039149849 +0000 UTC m=+870.079207604" Feb 04 08:56:00 crc kubenswrapper[4644]: I0204 08:56:00.846810 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v6q27" event={"ID":"6f482e24-1f12-48bd-8944-93b1e7ee2d76","Type":"ContainerStarted","Data":"0b207e8f4e68829a65cdaa1e4bca3664431ace7e54cf8dffbe2f84ac144a10a5"} Feb 04 08:56:00 crc kubenswrapper[4644]: I0204 08:56:00.848085 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v6q27" Feb 04 08:56:00 crc kubenswrapper[4644]: I0204 08:56:00.862266 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v6q27" podStartSLOduration=3.484658381 podStartE2EDuration="39.862253531s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.705850422 +0000 UTC m=+833.745908177" lastFinishedPulling="2026-02-04 08:56:00.083445572 +0000 UTC m=+870.123503327" observedRunningTime="2026-02-04 08:56:00.859107555 +0000 UTC m=+870.899165310" watchObservedRunningTime="2026-02-04 08:56:00.862253531 +0000 UTC m=+870.902311286" Feb 04 08:56:01 crc kubenswrapper[4644]: I0204 08:56:01.635742 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-fhr46" Feb 04 08:56:01 crc kubenswrapper[4644]: I0204 08:56:01.639577 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-stnhl" Feb 04 08:56:01 crc kubenswrapper[4644]: I0204 08:56:01.640387 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hwkc4" Feb 04 08:56:01 crc kubenswrapper[4644]: I0204 08:56:01.977437 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pb5zg" Feb 04 08:56:02 crc kubenswrapper[4644]: I0204 08:56:02.115928 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xw5rw" Feb 04 08:56:02 crc kubenswrapper[4644]: I0204 08:56:02.125741 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-6mv9v" Feb 04 08:56:02 crc kubenswrapper[4644]: I0204 08:56:02.211230 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-7jlm9" Feb 04 08:56:02 crc kubenswrapper[4644]: I0204 08:56:02.280785 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-hp2fd" Feb 04 08:56:02 crc kubenswrapper[4644]: I0204 08:56:02.331217 4644 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tc45m" Feb 04 08:56:02 crc kubenswrapper[4644]: I0204 08:56:02.559539 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4r2z6" Feb 04 08:56:02 crc kubenswrapper[4644]: I0204 08:56:02.676001 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9msfm" Feb 04 08:56:02 crc kubenswrapper[4644]: I0204 08:56:02.722594 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-8l8s8" Feb 04 08:56:02 crc kubenswrapper[4644]: I0204 08:56:02.859534 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xmsgv" event={"ID":"e9033b55-edfc-440d-bd2c-fa027d27f034","Type":"ContainerStarted","Data":"006f12eb99bd17b435ce2fa1d8bea11286830c04adc0310dcf12e5ddd8142d37"} Feb 04 08:56:02 crc kubenswrapper[4644]: I0204 08:56:02.860120 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xmsgv" Feb 04 08:56:02 crc kubenswrapper[4644]: I0204 08:56:02.873759 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xmsgv" podStartSLOduration=3.466102203 podStartE2EDuration="41.873741207s" podCreationTimestamp="2026-02-04 08:55:21 +0000 UTC" firstStartedPulling="2026-02-04 08:55:23.654894337 +0000 UTC m=+833.694952092" lastFinishedPulling="2026-02-04 08:56:02.062533331 +0000 UTC m=+872.102591096" observedRunningTime="2026-02-04 08:56:02.87347772 +0000 UTC m=+872.913535475" watchObservedRunningTime="2026-02-04 08:56:02.873741207 +0000 UTC m=+872.913798962" Feb 04 08:56:07 crc kubenswrapper[4644]: I0204 08:56:07.904725 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d777fx" Feb 04 08:56:11 crc kubenswrapper[4644]: I0204 08:56:11.597106 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sxbgc" Feb 04 08:56:11 crc kubenswrapper[4644]: I0204 08:56:11.812747 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g9w8f" Feb 04 08:56:12 crc kubenswrapper[4644]: I0204 08:56:12.031169 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-cln6d" Feb 04 08:56:12 crc kubenswrapper[4644]: I0204 08:56:12.037690 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xmsgv" Feb 04 08:56:12 crc kubenswrapper[4644]: I0204 08:56:12.081132 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-t5sv7" Feb 04 08:56:12 crc kubenswrapper[4644]: I0204 08:56:12.139748 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v6q27" Feb 04 08:56:12 crc kubenswrapper[4644]: I0204 
08:56:12.187940 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9n6pj" Feb 04 08:56:13 crc kubenswrapper[4644]: I0204 08:56:13.747865 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6ldzh" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.517689 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lmv2v"] Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.519567 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lmv2v" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.522981 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.522981 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fm952" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.523420 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.523647 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.537842 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lmv2v"] Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.606950 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf2d4d83-ac2e-444b-838e-bb35f02573f5-config\") pod \"dnsmasq-dns-675f4bcbfc-lmv2v\" (UID: \"cf2d4d83-ac2e-444b-838e-bb35f02573f5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lmv2v" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.607016 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8drv\" (UniqueName: \"kubernetes.io/projected/cf2d4d83-ac2e-444b-838e-bb35f02573f5-kube-api-access-n8drv\") pod \"dnsmasq-dns-675f4bcbfc-lmv2v\" (UID: \"cf2d4d83-ac2e-444b-838e-bb35f02573f5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lmv2v" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.619981 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ltgwb"] Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.621245 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.624375 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.645943 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ltgwb"] Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.708473 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d78ae621-a78a-4517-b4bb-dd43c98cd493-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ltgwb\" (UID: \"d78ae621-a78a-4517-b4bb-dd43c98cd493\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.708985 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf2d4d83-ac2e-444b-838e-bb35f02573f5-config\") pod \"dnsmasq-dns-675f4bcbfc-lmv2v\" (UID: \"cf2d4d83-ac2e-444b-838e-bb35f02573f5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lmv2v" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.709070 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8drv\" (UniqueName: \"kubernetes.io/projected/cf2d4d83-ac2e-444b-838e-bb35f02573f5-kube-api-access-n8drv\") pod \"dnsmasq-dns-675f4bcbfc-lmv2v\" (UID: \"cf2d4d83-ac2e-444b-838e-bb35f02573f5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lmv2v" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.709171 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d78ae621-a78a-4517-b4bb-dd43c98cd493-config\") pod \"dnsmasq-dns-78dd6ddcc-ltgwb\" (UID: \"d78ae621-a78a-4517-b4bb-dd43c98cd493\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.709244 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdr8p\" (UniqueName: \"kubernetes.io/projected/d78ae621-a78a-4517-b4bb-dd43c98cd493-kube-api-access-bdr8p\") pod \"dnsmasq-dns-78dd6ddcc-ltgwb\" (UID: \"d78ae621-a78a-4517-b4bb-dd43c98cd493\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.710139 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf2d4d83-ac2e-444b-838e-bb35f02573f5-config\") pod \"dnsmasq-dns-675f4bcbfc-lmv2v\" (UID: \"cf2d4d83-ac2e-444b-838e-bb35f02573f5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lmv2v" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.758656 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8drv\" (UniqueName: \"kubernetes.io/projected/cf2d4d83-ac2e-444b-838e-bb35f02573f5-kube-api-access-n8drv\") pod \"dnsmasq-dns-675f4bcbfc-lmv2v\" (UID: \"cf2d4d83-ac2e-444b-838e-bb35f02573f5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lmv2v" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.810137 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d78ae621-a78a-4517-b4bb-dd43c98cd493-config\") pod \"dnsmasq-dns-78dd6ddcc-ltgwb\" (UID: \"d78ae621-a78a-4517-b4bb-dd43c98cd493\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb" Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 
Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.810194 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdr8p\" (UniqueName: \"kubernetes.io/projected/d78ae621-a78a-4517-b4bb-dd43c98cd493-kube-api-access-bdr8p\") pod \"dnsmasq-dns-78dd6ddcc-ltgwb\" (UID: \"d78ae621-a78a-4517-b4bb-dd43c98cd493\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb"
Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.810282 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d78ae621-a78a-4517-b4bb-dd43c98cd493-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ltgwb\" (UID: \"d78ae621-a78a-4517-b4bb-dd43c98cd493\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb"
Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.811596 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d78ae621-a78a-4517-b4bb-dd43c98cd493-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ltgwb\" (UID: \"d78ae621-a78a-4517-b4bb-dd43c98cd493\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb"
Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.811825 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d78ae621-a78a-4517-b4bb-dd43c98cd493-config\") pod \"dnsmasq-dns-78dd6ddcc-ltgwb\" (UID: \"d78ae621-a78a-4517-b4bb-dd43c98cd493\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb"
Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.845679 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lmv2v"
Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.846860 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdr8p\" (UniqueName: \"kubernetes.io/projected/d78ae621-a78a-4517-b4bb-dd43c98cd493-kube-api-access-bdr8p\") pod \"dnsmasq-dns-78dd6ddcc-ltgwb\" (UID: \"d78ae621-a78a-4517-b4bb-dd43c98cd493\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb"
Feb 04 08:56:35 crc kubenswrapper[4644]: I0204 08:56:35.944106 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb"
Feb 04 08:56:36 crc kubenswrapper[4644]: I0204 08:56:36.472238 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ltgwb"]
Feb 04 08:56:36 crc kubenswrapper[4644]: I0204 08:56:36.540191 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lmv2v"]
Feb 04 08:56:36 crc kubenswrapper[4644]: W0204 08:56:36.552757 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf2d4d83_ac2e_444b_838e_bb35f02573f5.slice/crio-45275711e3e90bfb944f5b41b9318e3a2818e0ca6d5b6249e06d1b942077651c WatchSource:0}: Error finding container 45275711e3e90bfb944f5b41b9318e3a2818e0ca6d5b6249e06d1b942077651c: Status 404 returned error can't find the container with id 45275711e3e90bfb944f5b41b9318e3a2818e0ca6d5b6249e06d1b942077651c
Feb 04 08:56:37 crc kubenswrapper[4644]: I0204 08:56:37.141543 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lmv2v" event={"ID":"cf2d4d83-ac2e-444b-838e-bb35f02573f5","Type":"ContainerStarted","Data":"45275711e3e90bfb944f5b41b9318e3a2818e0ca6d5b6249e06d1b942077651c"}
Feb 04 08:56:37 crc kubenswrapper[4644]: I0204 08:56:37.143388 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb" event={"ID":"d78ae621-a78a-4517-b4bb-dd43c98cd493","Type":"ContainerStarted","Data":"d86b518bea221dc0a2aa6473d771603ec68c567e60449e9c1d33f7639f9908e9"}
Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.277457 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lmv2v"]
Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.315195 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ct7pm"]
Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.316878 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ct7pm"
Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.329602 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ct7pm"]
Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.457866 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ad9ec9-e651-467e-a602-98a3ede6b550-config\") pod \"dnsmasq-dns-666b6646f7-ct7pm\" (UID: \"01ad9ec9-e651-467e-a602-98a3ede6b550\") " pod="openstack/dnsmasq-dns-666b6646f7-ct7pm"
Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.457964 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srrrv\" (UniqueName: \"kubernetes.io/projected/01ad9ec9-e651-467e-a602-98a3ede6b550-kube-api-access-srrrv\") pod \"dnsmasq-dns-666b6646f7-ct7pm\" (UID: \"01ad9ec9-e651-467e-a602-98a3ede6b550\") " pod="openstack/dnsmasq-dns-666b6646f7-ct7pm"
Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.457997 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01ad9ec9-e651-467e-a602-98a3ede6b550-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ct7pm\" (UID: \"01ad9ec9-e651-467e-a602-98a3ede6b550\") " pod="openstack/dnsmasq-dns-666b6646f7-ct7pm"
Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.559446 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ad9ec9-e651-467e-a602-98a3ede6b550-config\") pod \"dnsmasq-dns-666b6646f7-ct7pm\" (UID: \"01ad9ec9-e651-467e-a602-98a3ede6b550\") " pod="openstack/dnsmasq-dns-666b6646f7-ct7pm"
Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.559506 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srrrv\" (UniqueName: \"kubernetes.io/projected/01ad9ec9-e651-467e-a602-98a3ede6b550-kube-api-access-srrrv\") pod \"dnsmasq-dns-666b6646f7-ct7pm\" (UID: \"01ad9ec9-e651-467e-a602-98a3ede6b550\") " pod="openstack/dnsmasq-dns-666b6646f7-ct7pm"
Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.559540 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01ad9ec9-e651-467e-a602-98a3ede6b550-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ct7pm\" (UID: \"01ad9ec9-e651-467e-a602-98a3ede6b550\") " pod="openstack/dnsmasq-dns-666b6646f7-ct7pm"
Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.560456 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01ad9ec9-e651-467e-a602-98a3ede6b550-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ct7pm\" (UID: \"01ad9ec9-e651-467e-a602-98a3ede6b550\") " pod="openstack/dnsmasq-dns-666b6646f7-ct7pm"
Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.560454 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ad9ec9-e651-467e-a602-98a3ede6b550-config\") pod \"dnsmasq-dns-666b6646f7-ct7pm\" (UID: \"01ad9ec9-e651-467e-a602-98a3ede6b550\") " pod="openstack/dnsmasq-dns-666b6646f7-ct7pm"
\"kubernetes.io/projected/01ad9ec9-e651-467e-a602-98a3ede6b550-kube-api-access-srrrv\") pod \"dnsmasq-dns-666b6646f7-ct7pm\" (UID: \"01ad9ec9-e651-467e-a602-98a3ede6b550\") " pod="openstack/dnsmasq-dns-666b6646f7-ct7pm" Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.655212 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ct7pm" Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.682225 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ltgwb"] Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.704010 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2t89k"] Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.705529 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.730595 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2t89k"] Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.866978 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36865c3a-9c0b-4853-a08d-c56af12dd154-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2t89k\" (UID: \"36865c3a-9c0b-4853-a08d-c56af12dd154\") " pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.867048 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jnvh\" (UniqueName: \"kubernetes.io/projected/36865c3a-9c0b-4853-a08d-c56af12dd154-kube-api-access-6jnvh\") pod \"dnsmasq-dns-57d769cc4f-2t89k\" (UID: \"36865c3a-9c0b-4853-a08d-c56af12dd154\") " pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.867107 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36865c3a-9c0b-4853-a08d-c56af12dd154-config\") pod \"dnsmasq-dns-57d769cc4f-2t89k\" (UID: \"36865c3a-9c0b-4853-a08d-c56af12dd154\") " pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.969025 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36865c3a-9c0b-4853-a08d-c56af12dd154-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2t89k\" (UID: \"36865c3a-9c0b-4853-a08d-c56af12dd154\") " pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.969080 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jnvh\" (UniqueName: \"kubernetes.io/projected/36865c3a-9c0b-4853-a08d-c56af12dd154-kube-api-access-6jnvh\") pod \"dnsmasq-dns-57d769cc4f-2t89k\" (UID: \"36865c3a-9c0b-4853-a08d-c56af12dd154\") " pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.969106 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36865c3a-9c0b-4853-a08d-c56af12dd154-config\") pod \"dnsmasq-dns-57d769cc4f-2t89k\" (UID: \"36865c3a-9c0b-4853-a08d-c56af12dd154\") " pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.970043 4644 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36865c3a-9c0b-4853-a08d-c56af12dd154-config\") pod \"dnsmasq-dns-57d769cc4f-2t89k\" (UID: \"36865c3a-9c0b-4853-a08d-c56af12dd154\") " pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" Feb 04 08:56:38 crc kubenswrapper[4644]: I0204 08:56:38.970573 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36865c3a-9c0b-4853-a08d-c56af12dd154-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2t89k\" (UID: \"36865c3a-9c0b-4853-a08d-c56af12dd154\") " pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.001462 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jnvh\" (UniqueName: \"kubernetes.io/projected/36865c3a-9c0b-4853-a08d-c56af12dd154-kube-api-access-6jnvh\") pod \"dnsmasq-dns-57d769cc4f-2t89k\" (UID: \"36865c3a-9c0b-4853-a08d-c56af12dd154\") " pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.069760 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.283976 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ct7pm"] Feb 04 08:56:39 crc kubenswrapper[4644]: W0204 08:56:39.318962 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01ad9ec9_e651_467e_a602_98a3ede6b550.slice/crio-71248621349e8b3fc051052871a1099592e87c7681ee932860a27aa3530773e7 WatchSource:0}: Error finding container 71248621349e8b3fc051052871a1099592e87c7681ee932860a27aa3530773e7: Status 404 returned error can't find the container with id 71248621349e8b3fc051052871a1099592e87c7681ee932860a27aa3530773e7 Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.495033 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.496490 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.499077 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-tgb8m" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.499228 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.499377 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.499474 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.499499 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.499487 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.500720 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.505307 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.554999 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2t89k"] Feb 04 08:56:39 crc kubenswrapper[4644]: W0204 08:56:39.562369 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36865c3a_9c0b_4853_a08d_c56af12dd154.slice/crio-2a2850a08786915b6125ab5995fd59945732d95aa38fd1cd1a95b8a8e8e821e5 WatchSource:0}: Error finding container 2a2850a08786915b6125ab5995fd59945732d95aa38fd1cd1a95b8a8e8e821e5: Status 404 returned error can't find the container with id 2a2850a08786915b6125ab5995fd59945732d95aa38fd1cd1a95b8a8e8e821e5 Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.570493 4644 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.588179 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.588254 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.588293 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.588366 4644 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-config-data\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.588395 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.588438 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.588468 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrt9q\" (UniqueName: \"kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-kube-api-access-rrt9q\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.588503 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.588543 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.588563 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cfd19433-aab2-4d07-99e5-edee81956813-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.588589 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cfd19433-aab2-4d07-99e5-edee81956813-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.691033 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-config-data\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.691088 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.691126 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.691152 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrt9q\" (UniqueName: \"kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-kube-api-access-rrt9q\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.691893 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.691944 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.691971 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cfd19433-aab2-4d07-99e5-edee81956813-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.691997 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cfd19433-aab2-4d07-99e5-edee81956813-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.692045 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.692077 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.692098 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.692127 4644 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.692421 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.692746 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.693169 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-config-data\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.693212 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.700895 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cfd19433-aab2-4d07-99e5-edee81956813-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.703379 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.709392 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.709921 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrt9q\" (UniqueName: \"kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-kube-api-access-rrt9q\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.712541 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cfd19433-aab2-4d07-99e5-edee81956813-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " 
pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.722774 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.748159 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.821202 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.832769 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.843275 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.844612 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.845037 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.846641 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.846899 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.847086 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.847310 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.847569 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-h8zm5" Feb 04 08:56:39 crc kubenswrapper[4644]: I0204 08:56:39.853169 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.003260 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.003359 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.003388 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.003422 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ph5j\" (UniqueName: \"kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-kube-api-access-8ph5j\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.003467 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.003499 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.003520 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.003549 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.003776 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.003845 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.003863 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.105010 4644 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.105057 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.105074 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.105093 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.105127 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.105143 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.105176 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ph5j\" (UniqueName: \"kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-kube-api-access-8ph5j\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.105195 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.105217 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.105233 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.105254 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.105383 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.106696 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.107534 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.107776 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.108161 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.108888 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.113604 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.116288 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 
08:56:40.117391 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.124538 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ph5j\" (UniqueName: \"kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-kube-api-access-8ph5j\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.129441 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.154458 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.162856 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.195029 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" event={"ID":"36865c3a-9c0b-4853-a08d-c56af12dd154","Type":"ContainerStarted","Data":"2a2850a08786915b6125ab5995fd59945732d95aa38fd1cd1a95b8a8e8e821e5"} Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.196012 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ct7pm" event={"ID":"01ad9ec9-e651-467e-a602-98a3ede6b550","Type":"ContainerStarted","Data":"71248621349e8b3fc051052871a1099592e87c7681ee932860a27aa3530773e7"} Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.196993 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.648042 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.963354 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.964504 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.972569 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.975438 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-w7266" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.977593 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.977860 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 04 08:56:40 crc kubenswrapper[4644]: I0204 08:56:40.987562 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.001419 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.119511 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bf50d46-1c85-4db8-9887-f30f832212c1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.119575 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9bf50d46-1c85-4db8-9887-f30f832212c1-kolla-config\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.119601 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9bf50d46-1c85-4db8-9887-f30f832212c1-config-data-default\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.119630 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf50d46-1c85-4db8-9887-f30f832212c1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.119662 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.119697 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bf50d46-1c85-4db8-9887-f30f832212c1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.119717 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jl7xl\" (UniqueName: \"kubernetes.io/projected/9bf50d46-1c85-4db8-9887-f30f832212c1-kube-api-access-jl7xl\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.119766 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9bf50d46-1c85-4db8-9887-f30f832212c1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.221726 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bf50d46-1c85-4db8-9887-f30f832212c1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.221850 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl7xl\" (UniqueName: \"kubernetes.io/projected/9bf50d46-1c85-4db8-9887-f30f832212c1-kube-api-access-jl7xl\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.221922 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9bf50d46-1c85-4db8-9887-f30f832212c1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.221951 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bf50d46-1c85-4db8-9887-f30f832212c1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.222000 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9bf50d46-1c85-4db8-9887-f30f832212c1-kolla-config\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.222026 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9bf50d46-1c85-4db8-9887-f30f832212c1-config-data-default\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.222059 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf50d46-1c85-4db8-9887-f30f832212c1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.222086 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: 
\"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.222439 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.223519 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9bf50d46-1c85-4db8-9887-f30f832212c1-kolla-config\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.223759 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9bf50d46-1c85-4db8-9887-f30f832212c1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.223998 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bf50d46-1c85-4db8-9887-f30f832212c1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.224208 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9bf50d46-1c85-4db8-9887-f30f832212c1-config-data-default\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.239470 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf50d46-1c85-4db8-9887-f30f832212c1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.240599 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d","Type":"ContainerStarted","Data":"327a530cc5033a4f62d47a8548278095c453566f0986737c95124b2da8826a33"} Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.252176 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bf50d46-1c85-4db8-9887-f30f832212c1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.255475 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cfd19433-aab2-4d07-99e5-edee81956813","Type":"ContainerStarted","Data":"c7c35b2caf433f394fc693567e18422bb37067e89a1d3da7b7b5463036ea5f89"} Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.263314 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl7xl\" (UniqueName: \"kubernetes.io/projected/9bf50d46-1c85-4db8-9887-f30f832212c1-kube-api-access-jl7xl\") pod 
\"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.267455 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"9bf50d46-1c85-4db8-9887-f30f832212c1\") " pod="openstack/openstack-galera-0" Feb 04 08:56:41 crc kubenswrapper[4644]: I0204 08:56:41.311815 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.034491 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 04 08:56:42 crc kubenswrapper[4644]: W0204 08:56:42.144672 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bf50d46_1c85_4db8_9887_f30f832212c1.slice/crio-e003b7a3a98455a7a06004ed55115f7d8c922e800beca3680d7d7a849ae0c894 WatchSource:0}: Error finding container e003b7a3a98455a7a06004ed55115f7d8c922e800beca3680d7d7a849ae0c894: Status 404 returned error can't find the container with id e003b7a3a98455a7a06004ed55115f7d8c922e800beca3680d7d7a849ae0c894 Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.331695 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9bf50d46-1c85-4db8-9887-f30f832212c1","Type":"ContainerStarted","Data":"e003b7a3a98455a7a06004ed55115f7d8c922e800beca3680d7d7a849ae0c894"} Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.439664 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.446406 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.448743 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.449080 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.449906 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.450609 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-w492j" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.450821 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.560131 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4536ebcc-8962-4cf4-9cae-5db170118156-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.560186 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.560210 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4536ebcc-8962-4cf4-9cae-5db170118156-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.560253 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk6rz\" (UniqueName: \"kubernetes.io/projected/4536ebcc-8962-4cf4-9cae-5db170118156-kube-api-access-sk6rz\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.560301 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4536ebcc-8962-4cf4-9cae-5db170118156-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.560347 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4536ebcc-8962-4cf4-9cae-5db170118156-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.560371 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/4536ebcc-8962-4cf4-9cae-5db170118156-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.560399 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4536ebcc-8962-4cf4-9cae-5db170118156-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.664171 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4536ebcc-8962-4cf4-9cae-5db170118156-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.664241 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk6rz\" (UniqueName: \"kubernetes.io/projected/4536ebcc-8962-4cf4-9cae-5db170118156-kube-api-access-sk6rz\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.664734 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4536ebcc-8962-4cf4-9cae-5db170118156-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.664989 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4536ebcc-8962-4cf4-9cae-5db170118156-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.665057 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4536ebcc-8962-4cf4-9cae-5db170118156-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.665295 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4536ebcc-8962-4cf4-9cae-5db170118156-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.665426 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4536ebcc-8962-4cf4-9cae-5db170118156-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.665493 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.668629 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.669998 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4536ebcc-8962-4cf4-9cae-5db170118156-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.671241 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4536ebcc-8962-4cf4-9cae-5db170118156-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.671381 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4536ebcc-8962-4cf4-9cae-5db170118156-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.673078 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4536ebcc-8962-4cf4-9cae-5db170118156-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.676159 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4536ebcc-8962-4cf4-9cae-5db170118156-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.680061 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4536ebcc-8962-4cf4-9cae-5db170118156-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.691059 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk6rz\" (UniqueName: \"kubernetes.io/projected/4536ebcc-8962-4cf4-9cae-5db170118156-kube-api-access-sk6rz\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.714026 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4536ebcc-8962-4cf4-9cae-5db170118156\") " 
pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.791837 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.864491 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.865714 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.869541 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.869710 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.874391 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-tp2bq" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.879137 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.974286 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c568fc8f-9f0b-496b-b39e-51ef99241e6e-kolla-config\") pod \"memcached-0\" (UID: \"c568fc8f-9f0b-496b-b39e-51ef99241e6e\") " pod="openstack/memcached-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.974345 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c568fc8f-9f0b-496b-b39e-51ef99241e6e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c568fc8f-9f0b-496b-b39e-51ef99241e6e\") " pod="openstack/memcached-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.974376 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dsvr\" (UniqueName: \"kubernetes.io/projected/c568fc8f-9f0b-496b-b39e-51ef99241e6e-kube-api-access-9dsvr\") pod \"memcached-0\" (UID: \"c568fc8f-9f0b-496b-b39e-51ef99241e6e\") " pod="openstack/memcached-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.974405 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c568fc8f-9f0b-496b-b39e-51ef99241e6e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c568fc8f-9f0b-496b-b39e-51ef99241e6e\") " pod="openstack/memcached-0" Feb 04 08:56:42 crc kubenswrapper[4644]: I0204 08:56:42.974460 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c568fc8f-9f0b-496b-b39e-51ef99241e6e-config-data\") pod \"memcached-0\" (UID: \"c568fc8f-9f0b-496b-b39e-51ef99241e6e\") " pod="openstack/memcached-0" Feb 04 08:56:43 crc kubenswrapper[4644]: I0204 08:56:43.075569 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c568fc8f-9f0b-496b-b39e-51ef99241e6e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c568fc8f-9f0b-496b-b39e-51ef99241e6e\") " pod="openstack/memcached-0" Feb 04 08:56:43 crc kubenswrapper[4644]: I0204 08:56:43.075630 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9dsvr\" (UniqueName: \"kubernetes.io/projected/c568fc8f-9f0b-496b-b39e-51ef99241e6e-kube-api-access-9dsvr\") pod \"memcached-0\" (UID: \"c568fc8f-9f0b-496b-b39e-51ef99241e6e\") " pod="openstack/memcached-0" Feb 04 08:56:43 crc kubenswrapper[4644]: I0204 08:56:43.075660 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c568fc8f-9f0b-496b-b39e-51ef99241e6e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c568fc8f-9f0b-496b-b39e-51ef99241e6e\") " pod="openstack/memcached-0" Feb 04 08:56:43 crc kubenswrapper[4644]: I0204 08:56:43.075722 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c568fc8f-9f0b-496b-b39e-51ef99241e6e-config-data\") pod \"memcached-0\" (UID: \"c568fc8f-9f0b-496b-b39e-51ef99241e6e\") " pod="openstack/memcached-0" Feb 04 08:56:43 crc kubenswrapper[4644]: I0204 08:56:43.075772 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c568fc8f-9f0b-496b-b39e-51ef99241e6e-kolla-config\") pod \"memcached-0\" (UID: \"c568fc8f-9f0b-496b-b39e-51ef99241e6e\") " pod="openstack/memcached-0" Feb 04 08:56:43 crc kubenswrapper[4644]: I0204 08:56:43.079936 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c568fc8f-9f0b-496b-b39e-51ef99241e6e-kolla-config\") pod \"memcached-0\" (UID: \"c568fc8f-9f0b-496b-b39e-51ef99241e6e\") " pod="openstack/memcached-0" Feb 04 08:56:43 crc kubenswrapper[4644]: I0204 08:56:43.080220 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c568fc8f-9f0b-496b-b39e-51ef99241e6e-config-data\") pod \"memcached-0\" (UID: \"c568fc8f-9f0b-496b-b39e-51ef99241e6e\") " pod="openstack/memcached-0" Feb 04 08:56:43 crc kubenswrapper[4644]: I0204 08:56:43.086671 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c568fc8f-9f0b-496b-b39e-51ef99241e6e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c568fc8f-9f0b-496b-b39e-51ef99241e6e\") " pod="openstack/memcached-0" Feb 04 08:56:43 crc kubenswrapper[4644]: I0204 08:56:43.094275 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c568fc8f-9f0b-496b-b39e-51ef99241e6e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c568fc8f-9f0b-496b-b39e-51ef99241e6e\") " pod="openstack/memcached-0" Feb 04 08:56:43 crc kubenswrapper[4644]: I0204 08:56:43.110521 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dsvr\" (UniqueName: \"kubernetes.io/projected/c568fc8f-9f0b-496b-b39e-51ef99241e6e-kube-api-access-9dsvr\") pod \"memcached-0\" (UID: \"c568fc8f-9f0b-496b-b39e-51ef99241e6e\") " pod="openstack/memcached-0" Feb 04 08:56:43 crc kubenswrapper[4644]: I0204 08:56:43.254545 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 04 08:56:43 crc kubenswrapper[4644]: I0204 08:56:43.552011 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 04 08:56:43 crc kubenswrapper[4644]: I0204 08:56:43.804871 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 04 08:56:44 crc kubenswrapper[4644]: I0204 08:56:44.379436 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c568fc8f-9f0b-496b-b39e-51ef99241e6e","Type":"ContainerStarted","Data":"8e8a0d18e402c9649884db510f8a12668b9ec87c6a50ffa58ff49f1657b4c613"} Feb 04 08:56:44 crc kubenswrapper[4644]: I0204 08:56:44.395589 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4536ebcc-8962-4cf4-9cae-5db170118156","Type":"ContainerStarted","Data":"17d1b37e770f3534ada81deabdb86b9b321c2bbcafc2565f924a23ba27eaf287"} Feb 04 08:56:44 crc kubenswrapper[4644]: I0204 08:56:44.717175 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 04 08:56:44 crc kubenswrapper[4644]: I0204 08:56:44.723489 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 04 08:56:44 crc kubenswrapper[4644]: I0204 08:56:44.723807 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 04 08:56:44 crc kubenswrapper[4644]: I0204 08:56:44.735742 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-qd5lc" Feb 04 08:56:44 crc kubenswrapper[4644]: I0204 08:56:44.815647 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmssd\" (UniqueName: \"kubernetes.io/projected/42467ee2-0414-443d-96c2-61b4118dd8d6-kube-api-access-bmssd\") pod \"kube-state-metrics-0\" (UID: \"42467ee2-0414-443d-96c2-61b4118dd8d6\") " pod="openstack/kube-state-metrics-0" Feb 04 08:56:44 crc kubenswrapper[4644]: I0204 08:56:44.917811 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmssd\" (UniqueName: \"kubernetes.io/projected/42467ee2-0414-443d-96c2-61b4118dd8d6-kube-api-access-bmssd\") pod \"kube-state-metrics-0\" (UID: \"42467ee2-0414-443d-96c2-61b4118dd8d6\") " pod="openstack/kube-state-metrics-0" Feb 04 08:56:44 crc kubenswrapper[4644]: I0204 08:56:44.945052 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmssd\" (UniqueName: \"kubernetes.io/projected/42467ee2-0414-443d-96c2-61b4118dd8d6-kube-api-access-bmssd\") pod \"kube-state-metrics-0\" (UID: \"42467ee2-0414-443d-96c2-61b4118dd8d6\") " pod="openstack/kube-state-metrics-0" Feb 04 08:56:45 crc kubenswrapper[4644]: I0204 08:56:45.092154 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.151609 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8nfv7"] Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.152663 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.160657 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8nfv7"] Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.161000 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-htgxz" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.161093 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.161446 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.181969 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-zzhbv"] Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.183523 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.201343 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zzhbv"] Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.287875 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/964cdd6e-b29a-401d-9bb0-3375b663a899-var-log-ovn\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.287922 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll7v4\" (UniqueName: \"kubernetes.io/projected/964cdd6e-b29a-401d-9bb0-3375b663a899-kube-api-access-ll7v4\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.288035 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xtr8\" (UniqueName: \"kubernetes.io/projected/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-kube-api-access-7xtr8\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.288063 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-var-lib\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.288080 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-scripts\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.288122 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/964cdd6e-b29a-401d-9bb0-3375b663a899-scripts\") pod \"ovn-controller-8nfv7\" (UID: 
\"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.288159 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/964cdd6e-b29a-401d-9bb0-3375b663a899-var-run-ovn\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.288235 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-etc-ovs\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.288284 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/964cdd6e-b29a-401d-9bb0-3375b663a899-combined-ca-bundle\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.288497 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/964cdd6e-b29a-401d-9bb0-3375b663a899-ovn-controller-tls-certs\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.288538 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-var-log\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.288600 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/964cdd6e-b29a-401d-9bb0-3375b663a899-var-run\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.288643 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-var-run\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.358655 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.367500 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.376008 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.377149 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8w4pp" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.377219 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.377405 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.377948 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.390802 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/964cdd6e-b29a-401d-9bb0-3375b663a899-var-log-ovn\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.390850 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll7v4\" (UniqueName: \"kubernetes.io/projected/964cdd6e-b29a-401d-9bb0-3375b663a899-kube-api-access-ll7v4\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.390883 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xtr8\" (UniqueName: \"kubernetes.io/projected/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-kube-api-access-7xtr8\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.390907 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-var-lib\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.390930 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-scripts\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.390952 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/964cdd6e-b29a-401d-9bb0-3375b663a899-scripts\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.390980 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/964cdd6e-b29a-401d-9bb0-3375b663a899-var-run-ovn\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc 
kubenswrapper[4644]: I0204 08:56:48.391003 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-etc-ovs\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.391028 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/964cdd6e-b29a-401d-9bb0-3375b663a899-combined-ca-bundle\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.391057 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/964cdd6e-b29a-401d-9bb0-3375b663a899-ovn-controller-tls-certs\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.391101 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-var-log\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.391132 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/964cdd6e-b29a-401d-9bb0-3375b663a899-var-run\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.391166 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-var-run\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.392599 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-var-run\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.393372 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-var-log\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.393411 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/964cdd6e-b29a-401d-9bb0-3375b663a899-var-run\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.393634 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/964cdd6e-b29a-401d-9bb0-3375b663a899-var-log-ovn\") pod \"ovn-controller-8nfv7\" (UID: 
\"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.394545 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-var-lib\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.396054 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/964cdd6e-b29a-401d-9bb0-3375b663a899-var-run-ovn\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.396243 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-etc-ovs\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.402499 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/964cdd6e-b29a-401d-9bb0-3375b663a899-scripts\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.402687 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-scripts\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.419511 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.420819 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll7v4\" (UniqueName: \"kubernetes.io/projected/964cdd6e-b29a-401d-9bb0-3375b663a899-kube-api-access-ll7v4\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.421297 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/964cdd6e-b29a-401d-9bb0-3375b663a899-ovn-controller-tls-certs\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.427302 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/964cdd6e-b29a-401d-9bb0-3375b663a899-combined-ca-bundle\") pod \"ovn-controller-8nfv7\" (UID: \"964cdd6e-b29a-401d-9bb0-3375b663a899\") " pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.436249 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xtr8\" (UniqueName: \"kubernetes.io/projected/a5cee1f7-2917-47fe-95ac-96b0d9c502b7-kube-api-access-7xtr8\") pod \"ovn-controller-ovs-zzhbv\" (UID: \"a5cee1f7-2917-47fe-95ac-96b0d9c502b7\") " pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 
crc kubenswrapper[4644]: I0204 08:56:48.497074 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrh8x\" (UniqueName: \"kubernetes.io/projected/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-kube-api-access-hrh8x\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.497129 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-config\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.497156 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.497179 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.497265 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.497288 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.497308 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.499183 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.533888 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8nfv7" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.545521 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.600889 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.600972 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.601006 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.601047 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.601084 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrh8x\" (UniqueName: \"kubernetes.io/projected/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-kube-api-access-hrh8x\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.601102 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-config\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.601120 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.601135 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.601588 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.602684 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.602938 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-config\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.604206 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.605605 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.608761 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.609247 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.626311 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.626837 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrh8x\" (UniqueName: \"kubernetes.io/projected/8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f-kube-api-access-hrh8x\") pod \"ovsdbserver-nb-0\" (UID: \"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f\") " pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:48 crc kubenswrapper[4644]: I0204 08:56:48.703437 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.076912 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.079607 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.085281 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.085748 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.085901 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-qllkz" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.086012 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.098625 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.157669 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq5q5\" (UniqueName: \"kubernetes.io/projected/73360e1e-70eb-499b-b3a1-cd9bde6ac466-kube-api-access-zq5q5\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.157995 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73360e1e-70eb-499b-b3a1-cd9bde6ac466-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.158020 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73360e1e-70eb-499b-b3a1-cd9bde6ac466-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.158057 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73360e1e-70eb-499b-b3a1-cd9bde6ac466-config\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.158080 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73360e1e-70eb-499b-b3a1-cd9bde6ac466-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.158392 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.158466 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73360e1e-70eb-499b-b3a1-cd9bde6ac466-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.158490 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73360e1e-70eb-499b-b3a1-cd9bde6ac466-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.260456 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.260512 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73360e1e-70eb-499b-b3a1-cd9bde6ac466-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.260536 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73360e1e-70eb-499b-b3a1-cd9bde6ac466-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.260580 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq5q5\" (UniqueName: \"kubernetes.io/projected/73360e1e-70eb-499b-b3a1-cd9bde6ac466-kube-api-access-zq5q5\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.260603 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73360e1e-70eb-499b-b3a1-cd9bde6ac466-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.260633 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73360e1e-70eb-499b-b3a1-cd9bde6ac466-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.260669 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73360e1e-70eb-499b-b3a1-cd9bde6ac466-config\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.260691 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73360e1e-70eb-499b-b3a1-cd9bde6ac466-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.261196 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.262076 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73360e1e-70eb-499b-b3a1-cd9bde6ac466-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.262770 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73360e1e-70eb-499b-b3a1-cd9bde6ac466-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.264036 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73360e1e-70eb-499b-b3a1-cd9bde6ac466-config\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.271095 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73360e1e-70eb-499b-b3a1-cd9bde6ac466-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.276170 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73360e1e-70eb-499b-b3a1-cd9bde6ac466-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.279502 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq5q5\" (UniqueName: \"kubernetes.io/projected/73360e1e-70eb-499b-b3a1-cd9bde6ac466-kube-api-access-zq5q5\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.286647 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73360e1e-70eb-499b-b3a1-cd9bde6ac466-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.297899 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"73360e1e-70eb-499b-b3a1-cd9bde6ac466\") " pod="openstack/ovsdbserver-sb-0" Feb 04 08:56:52 crc kubenswrapper[4644]: I0204 08:56:52.415825 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 04 08:57:02 crc kubenswrapper[4644]: E0204 08:57:02.336533 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 04 08:57:02 crc kubenswrapper[4644]: E0204 08:57:02.337108 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jl7xl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(9bf50d46-1c85-4db8-9887-f30f832212c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:57:02 crc kubenswrapper[4644]: E0204 08:57:02.339949 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="9bf50d46-1c85-4db8-9887-f30f832212c1" Feb 04 08:57:02 crc kubenswrapper[4644]: E0204 08:57:02.574151 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="9bf50d46-1c85-4db8-9887-f30f832212c1" Feb 04 08:57:05 crc kubenswrapper[4644]: I0204 08:57:05.555236 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 08:57:05 crc kubenswrapper[4644]: I0204 08:57:05.555736 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 08:57:08 crc kubenswrapper[4644]: E0204 08:57:08.706507 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 04 08:57:08 crc kubenswrapper[4644]: E0204 08:57:08.707010 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n7bh5fbh5c9h674h89h647h4h68h678h8dh87h564h584hbhd9h675hf6h8bhd6h68fhc6h5dh6h95h98h546h595h67dh65hd7h689h5cq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9dsvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(c568fc8f-9f0b-496b-b39e-51ef99241e6e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:57:08 crc kubenswrapper[4644]: E0204 08:57:08.708490 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="c568fc8f-9f0b-496b-b39e-51ef99241e6e" Feb 04 08:57:09 crc kubenswrapper[4644]: E0204 08:57:09.638945 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="c568fc8f-9f0b-496b-b39e-51ef99241e6e" Feb 04 08:57:09 crc kubenswrapper[4644]: E0204 08:57:09.641189 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 04 08:57:09 crc kubenswrapper[4644]: E0204 08:57:09.641492 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srrrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-ct7pm_openstack(01ad9ec9-e651-467e-a602-98a3ede6b550): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:57:09 crc kubenswrapper[4644]: E0204 08:57:09.642903 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-ct7pm" podUID="01ad9ec9-e651-467e-a602-98a3ede6b550" Feb 04 08:57:09 crc kubenswrapper[4644]: E0204 08:57:09.718922 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 04 08:57:09 crc kubenswrapper[4644]: E0204 08:57:09.719100 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdr8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-ltgwb_openstack(d78ae621-a78a-4517-b4bb-dd43c98cd493): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:57:09 crc kubenswrapper[4644]: E0204 08:57:09.720466 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb" podUID="d78ae621-a78a-4517-b4bb-dd43c98cd493" Feb 04 08:57:09 crc kubenswrapper[4644]: E0204 08:57:09.737848 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 04 08:57:09 crc kubenswrapper[4644]: E0204 08:57:09.738030 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n8drv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-lmv2v_openstack(cf2d4d83-ac2e-444b-838e-bb35f02573f5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:57:09 crc kubenswrapper[4644]: E0204 08:57:09.739368 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-lmv2v" podUID="cf2d4d83-ac2e-444b-838e-bb35f02573f5" Feb 04 08:57:09 crc kubenswrapper[4644]: E0204 08:57:09.768916 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 04 08:57:09 crc kubenswrapper[4644]: E0204 08:57:09.769096 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jnvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-2t89k_openstack(36865c3a-9c0b-4853-a08d-c56af12dd154): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:57:09 crc kubenswrapper[4644]: E0204 08:57:09.770447 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" podUID="36865c3a-9c0b-4853-a08d-c56af12dd154" Feb 04 08:57:10 crc kubenswrapper[4644]: I0204 08:57:10.447697 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 04 08:57:10 crc kubenswrapper[4644]: I0204 08:57:10.453222 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8nfv7"] Feb 04 08:57:10 crc kubenswrapper[4644]: W0204 08:57:10.479513 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42467ee2_0414_443d_96c2_61b4118dd8d6.slice/crio-2e62b812c4d64c6f11006cdbeba2cb9c6d74dda8aaccbc1596bff5bcdab2fdeb WatchSource:0}: Error finding container 2e62b812c4d64c6f11006cdbeba2cb9c6d74dda8aaccbc1596bff5bcdab2fdeb: Status 404 returned error can't find the container with id 2e62b812c4d64c6f11006cdbeba2cb9c6d74dda8aaccbc1596bff5bcdab2fdeb Feb 04 08:57:10 crc kubenswrapper[4644]: I0204 08:57:10.643557 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42467ee2-0414-443d-96c2-61b4118dd8d6","Type":"ContainerStarted","Data":"2e62b812c4d64c6f11006cdbeba2cb9c6d74dda8aaccbc1596bff5bcdab2fdeb"} Feb 04 08:57:10 crc kubenswrapper[4644]: I0204 08:57:10.645178 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-8nfv7" event={"ID":"964cdd6e-b29a-401d-9bb0-3375b663a899","Type":"ContainerStarted","Data":"6e07c094e82e8bb78b1e6dcbb8e650b643c9ce90e4fb6e3a725215feddca0da9"} Feb 04 08:57:10 crc kubenswrapper[4644]: I0204 08:57:10.647809 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4536ebcc-8962-4cf4-9cae-5db170118156","Type":"ContainerStarted","Data":"aa6048d4c46a85ee95c95b5bc5e1389027c913fd9a69ca251699cead8eb3abf0"} Feb 04 08:57:10 crc kubenswrapper[4644]: E0204 08:57:10.653207 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" podUID="36865c3a-9c0b-4853-a08d-c56af12dd154" Feb 04 08:57:10 crc kubenswrapper[4644]: E0204 08:57:10.653548 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-ct7pm" podUID="01ad9ec9-e651-467e-a602-98a3ede6b550" Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.022484 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.240265 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb" Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.246165 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lmv2v" Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.368128 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d78ae621-a78a-4517-b4bb-dd43c98cd493-dns-svc\") pod \"d78ae621-a78a-4517-b4bb-dd43c98cd493\" (UID: \"d78ae621-a78a-4517-b4bb-dd43c98cd493\") " Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.368203 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf2d4d83-ac2e-444b-838e-bb35f02573f5-config\") pod \"cf2d4d83-ac2e-444b-838e-bb35f02573f5\" (UID: \"cf2d4d83-ac2e-444b-838e-bb35f02573f5\") " Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.368245 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8drv\" (UniqueName: \"kubernetes.io/projected/cf2d4d83-ac2e-444b-838e-bb35f02573f5-kube-api-access-n8drv\") pod \"cf2d4d83-ac2e-444b-838e-bb35f02573f5\" (UID: \"cf2d4d83-ac2e-444b-838e-bb35f02573f5\") " Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.368267 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdr8p\" (UniqueName: \"kubernetes.io/projected/d78ae621-a78a-4517-b4bb-dd43c98cd493-kube-api-access-bdr8p\") pod \"d78ae621-a78a-4517-b4bb-dd43c98cd493\" (UID: \"d78ae621-a78a-4517-b4bb-dd43c98cd493\") " Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.368289 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d78ae621-a78a-4517-b4bb-dd43c98cd493-config\") pod \"d78ae621-a78a-4517-b4bb-dd43c98cd493\" (UID: 
\"d78ae621-a78a-4517-b4bb-dd43c98cd493\") " Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.369163 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d78ae621-a78a-4517-b4bb-dd43c98cd493-config" (OuterVolumeSpecName: "config") pod "d78ae621-a78a-4517-b4bb-dd43c98cd493" (UID: "d78ae621-a78a-4517-b4bb-dd43c98cd493"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.369544 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d78ae621-a78a-4517-b4bb-dd43c98cd493-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d78ae621-a78a-4517-b4bb-dd43c98cd493" (UID: "d78ae621-a78a-4517-b4bb-dd43c98cd493"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.369933 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf2d4d83-ac2e-444b-838e-bb35f02573f5-config" (OuterVolumeSpecName: "config") pod "cf2d4d83-ac2e-444b-838e-bb35f02573f5" (UID: "cf2d4d83-ac2e-444b-838e-bb35f02573f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.376159 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf2d4d83-ac2e-444b-838e-bb35f02573f5-kube-api-access-n8drv" (OuterVolumeSpecName: "kube-api-access-n8drv") pod "cf2d4d83-ac2e-444b-838e-bb35f02573f5" (UID: "cf2d4d83-ac2e-444b-838e-bb35f02573f5"). InnerVolumeSpecName "kube-api-access-n8drv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.376973 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78ae621-a78a-4517-b4bb-dd43c98cd493-kube-api-access-bdr8p" (OuterVolumeSpecName: "kube-api-access-bdr8p") pod "d78ae621-a78a-4517-b4bb-dd43c98cd493" (UID: "d78ae621-a78a-4517-b4bb-dd43c98cd493"). InnerVolumeSpecName "kube-api-access-bdr8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.469714 4644 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d78ae621-a78a-4517-b4bb-dd43c98cd493-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.469756 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf2d4d83-ac2e-444b-838e-bb35f02573f5-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.469770 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8drv\" (UniqueName: \"kubernetes.io/projected/cf2d4d83-ac2e-444b-838e-bb35f02573f5-kube-api-access-n8drv\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.469786 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdr8p\" (UniqueName: \"kubernetes.io/projected/d78ae621-a78a-4517-b4bb-dd43c98cd493-kube-api-access-bdr8p\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.469799 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d78ae621-a78a-4517-b4bb-dd43c98cd493-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.631032 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zzhbv"] Feb 04 08:57:11 crc kubenswrapper[4644]: W0204 08:57:11.644607 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5cee1f7_2917_47fe_95ac_96b0d9c502b7.slice/crio-277776d716762022f24fce4aa8f026af964a8e917b5cfe95d36a2531b0284dc8 WatchSource:0}: Error finding container 277776d716762022f24fce4aa8f026af964a8e917b5cfe95d36a2531b0284dc8: Status 404 returned error can't find the container with id 277776d716762022f24fce4aa8f026af964a8e917b5cfe95d36a2531b0284dc8 Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.656527 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb" event={"ID":"d78ae621-a78a-4517-b4bb-dd43c98cd493","Type":"ContainerDied","Data":"d86b518bea221dc0a2aa6473d771603ec68c567e60449e9c1d33f7639f9908e9"} Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.656559 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ltgwb" Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.664824 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"73360e1e-70eb-499b-b3a1-cd9bde6ac466","Type":"ContainerStarted","Data":"18504ef21ce3c7b8e7d3729e4d3ebf6df266ae784584a41928c550470e5bbd41"} Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.666778 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d","Type":"ContainerStarted","Data":"476c3b5afbd1bbb2a54d82ba89febe946b8c332f1976574c446aab96bda77407"} Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.670814 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cfd19433-aab2-4d07-99e5-edee81956813","Type":"ContainerStarted","Data":"55f7f2cedf9f2dc9d7b77ee907f455defd316d052c88269f3c66e4c1401cf77a"} Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.672407 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zzhbv" event={"ID":"a5cee1f7-2917-47fe-95ac-96b0d9c502b7","Type":"ContainerStarted","Data":"277776d716762022f24fce4aa8f026af964a8e917b5cfe95d36a2531b0284dc8"} Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.673584 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lmv2v" Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.673581 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lmv2v" event={"ID":"cf2d4d83-ac2e-444b-838e-bb35f02573f5","Type":"ContainerDied","Data":"45275711e3e90bfb944f5b41b9318e3a2818e0ca6d5b6249e06d1b942077651c"} Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.799100 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ltgwb"] Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.807769 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ltgwb"] Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.828913 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lmv2v"] Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.835033 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lmv2v"] Feb 04 08:57:11 crc kubenswrapper[4644]: I0204 08:57:11.894594 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 04 08:57:11 crc kubenswrapper[4644]: W0204 08:57:11.907835 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f3a6583_1f9f_45ab_a36b_aa5e9ac77c1f.slice/crio-642359a9d8ca9ae70922129d783716e060c223bcfd7655f5291312de22d40907 WatchSource:0}: Error finding container 642359a9d8ca9ae70922129d783716e060c223bcfd7655f5291312de22d40907: Status 404 returned error can't find the container with id 642359a9d8ca9ae70922129d783716e060c223bcfd7655f5291312de22d40907 Feb 04 08:57:12 crc kubenswrapper[4644]: I0204 08:57:12.672214 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf2d4d83-ac2e-444b-838e-bb35f02573f5" path="/var/lib/kubelet/pods/cf2d4d83-ac2e-444b-838e-bb35f02573f5/volumes" Feb 04 08:57:12 crc kubenswrapper[4644]: I0204 08:57:12.673031 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d78ae621-a78a-4517-b4bb-dd43c98cd493" path="/var/lib/kubelet/pods/d78ae621-a78a-4517-b4bb-dd43c98cd493/volumes" Feb 04 08:57:12 crc kubenswrapper[4644]: I0204 08:57:12.688363 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f","Type":"ContainerStarted","Data":"642359a9d8ca9ae70922129d783716e060c223bcfd7655f5291312de22d40907"} Feb 04 08:57:16 crc kubenswrapper[4644]: I0204 08:57:16.724679 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9bf50d46-1c85-4db8-9887-f30f832212c1","Type":"ContainerStarted","Data":"3d33fb84d14b2817f9c737e3bb436e8cab2000b1b1978863b118967dbca767df"} Feb 04 08:57:16 crc kubenswrapper[4644]: I0204 08:57:16.729421 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"73360e1e-70eb-499b-b3a1-cd9bde6ac466","Type":"ContainerStarted","Data":"f4147d4bf30431d09007024d4df35c04a21b5998c6b10e64efe67e13d4d36ea1"} Feb 04 08:57:16 crc kubenswrapper[4644]: I0204 08:57:16.732441 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8nfv7" event={"ID":"964cdd6e-b29a-401d-9bb0-3375b663a899","Type":"ContainerStarted","Data":"9485c04bef0150696d7f32d4e6d6bf279ff2f8e82bc95664c898015915de4e2d"} Feb 04 08:57:16 crc kubenswrapper[4644]: I0204 08:57:16.732884 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-8nfv7" Feb 04 08:57:16 crc kubenswrapper[4644]: I0204 08:57:16.738719 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zzhbv" event={"ID":"a5cee1f7-2917-47fe-95ac-96b0d9c502b7","Type":"ContainerStarted","Data":"71b0ce8a882eca489847152433cb01e1b02c86202de9f7fc5fde8d6e21f3a11b"} Feb 04 08:57:16 crc kubenswrapper[4644]: I0204 08:57:16.805679 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-8nfv7" podStartSLOduration=23.531674539 podStartE2EDuration="28.805658123s" podCreationTimestamp="2026-02-04 08:56:48 +0000 UTC" firstStartedPulling="2026-02-04 08:57:10.494269479 +0000 UTC m=+940.534327234" lastFinishedPulling="2026-02-04 08:57:15.768253063 +0000 UTC m=+945.808310818" observedRunningTime="2026-02-04 08:57:16.776084572 +0000 UTC m=+946.816142337" watchObservedRunningTime="2026-02-04 08:57:16.805658123 +0000 UTC m=+946.845715868" Feb 04 08:57:17 crc kubenswrapper[4644]: I0204 08:57:17.751990 4644 generic.go:334] "Generic (PLEG): container finished" podID="a5cee1f7-2917-47fe-95ac-96b0d9c502b7" containerID="71b0ce8a882eca489847152433cb01e1b02c86202de9f7fc5fde8d6e21f3a11b" exitCode=0 Feb 04 08:57:17 crc kubenswrapper[4644]: I0204 08:57:17.752095 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zzhbv" event={"ID":"a5cee1f7-2917-47fe-95ac-96b0d9c502b7","Type":"ContainerDied","Data":"71b0ce8a882eca489847152433cb01e1b02c86202de9f7fc5fde8d6e21f3a11b"} Feb 04 08:57:17 crc kubenswrapper[4644]: I0204 08:57:17.752598 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zzhbv" event={"ID":"a5cee1f7-2917-47fe-95ac-96b0d9c502b7","Type":"ContainerStarted","Data":"dd7160ca2ffdd8754b4a9f92f7b1c53002d26a9d5f042bd76aeb6f563a4e3fa6"} Feb 04 08:57:17 crc kubenswrapper[4644]: I0204 08:57:17.759081 4644 generic.go:334] "Generic (PLEG): container finished" podID="4536ebcc-8962-4cf4-9cae-5db170118156" containerID="aa6048d4c46a85ee95c95b5bc5e1389027c913fd9a69ca251699cead8eb3abf0" 
exitCode=0 Feb 04 08:57:17 crc kubenswrapper[4644]: I0204 08:57:17.759358 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4536ebcc-8962-4cf4-9cae-5db170118156","Type":"ContainerDied","Data":"aa6048d4c46a85ee95c95b5bc5e1389027c913fd9a69ca251699cead8eb3abf0"} Feb 04 08:57:17 crc kubenswrapper[4644]: I0204 08:57:17.767906 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f","Type":"ContainerStarted","Data":"c0539e449b80408b569ad8fdb268b192ae6dced874eb23a354bcd88800e17d88"} Feb 04 08:57:18 crc kubenswrapper[4644]: I0204 08:57:18.776111 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zzhbv" event={"ID":"a5cee1f7-2917-47fe-95ac-96b0d9c502b7","Type":"ContainerStarted","Data":"78b4f863fc48d5f4a62ea6fe2d30e4b734cd34d9195984f2db33831422d1ad69"} Feb 04 08:57:18 crc kubenswrapper[4644]: I0204 08:57:18.776554 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:57:18 crc kubenswrapper[4644]: I0204 08:57:18.776604 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:57:18 crc kubenswrapper[4644]: I0204 08:57:18.778588 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4536ebcc-8962-4cf4-9cae-5db170118156","Type":"ContainerStarted","Data":"515710552d9741e162b13a163172102b24ca5dfaad0d5345aeec294b3c438c4a"} Feb 04 08:57:18 crc kubenswrapper[4644]: I0204 08:57:18.825859 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-zzhbv" podStartSLOduration=26.721518827 podStartE2EDuration="30.825838137s" podCreationTimestamp="2026-02-04 08:56:48 +0000 UTC" firstStartedPulling="2026-02-04 08:57:11.647641706 +0000 UTC m=+941.687699471" lastFinishedPulling="2026-02-04 08:57:15.751961026 +0000 UTC m=+945.792018781" observedRunningTime="2026-02-04 08:57:18.800773541 +0000 UTC m=+948.840831306" watchObservedRunningTime="2026-02-04 08:57:18.825838137 +0000 UTC m=+948.865895892" Feb 04 08:57:18 crc kubenswrapper[4644]: I0204 08:57:18.829565 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=11.586098886 podStartE2EDuration="37.829549309s" podCreationTimestamp="2026-02-04 08:56:41 +0000 UTC" firstStartedPulling="2026-02-04 08:56:43.58456514 +0000 UTC m=+913.624622895" lastFinishedPulling="2026-02-04 08:57:09.828015563 +0000 UTC m=+939.868073318" observedRunningTime="2026-02-04 08:57:18.827778871 +0000 UTC m=+948.867836626" watchObservedRunningTime="2026-02-04 08:57:18.829549309 +0000 UTC m=+948.869607074" Feb 04 08:57:22 crc kubenswrapper[4644]: I0204 08:57:22.792550 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 04 08:57:22 crc kubenswrapper[4644]: I0204 08:57:22.792983 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 04 08:57:25 crc kubenswrapper[4644]: I0204 08:57:25.826276 4644 generic.go:334] "Generic (PLEG): container finished" podID="9bf50d46-1c85-4db8-9887-f30f832212c1" containerID="3d33fb84d14b2817f9c737e3bb436e8cab2000b1b1978863b118967dbca767df" exitCode=0 Feb 04 08:57:25 crc kubenswrapper[4644]: I0204 08:57:25.826361 4644 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/openstack-galera-0" event={"ID":"9bf50d46-1c85-4db8-9887-f30f832212c1","Type":"ContainerDied","Data":"3d33fb84d14b2817f9c737e3bb436e8cab2000b1b1978863b118967dbca767df"} Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.473493 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-mb84g"] Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.480807 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.482531 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.493354 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mb84g"] Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.510935 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-ovs-rundir\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.511257 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-config\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.511405 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-ovn-rundir\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.511540 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-combined-ca-bundle\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.511646 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv88p\" (UniqueName: \"kubernetes.io/projected/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-kube-api-access-hv88p\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.511790 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.612608 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.612663 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-ovs-rundir\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.612704 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-config\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.612733 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-ovn-rundir\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.612760 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-combined-ca-bundle\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.612777 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv88p\" (UniqueName: \"kubernetes.io/projected/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-kube-api-access-hv88p\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.613027 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-ovs-rundir\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.613069 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-ovn-rundir\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.613704 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-config\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.619899 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-combined-ca-bundle\") pod \"ovn-controller-metrics-mb84g\" 
(UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.626722 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.644886 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv88p\" (UniqueName: \"kubernetes.io/projected/b4cecbc7-4505-46d1-8ddb-4b454e614fb1-kube-api-access-hv88p\") pod \"ovn-controller-metrics-mb84g\" (UID: \"b4cecbc7-4505-46d1-8ddb-4b454e614fb1\") " pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.800379 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mb84g" Feb 04 08:57:34 crc kubenswrapper[4644]: I0204 08:57:34.984101 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2t89k"] Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.054016 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-f8tfw"] Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.055153 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.057169 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.076651 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-f8tfw"] Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.124269 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-f8tfw\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.124309 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfnsp\" (UniqueName: \"kubernetes.io/projected/539a7b01-25e6-49ec-8d04-e743c92ed53f-kube-api-access-mfnsp\") pod \"dnsmasq-dns-5bf47b49b7-f8tfw\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.124342 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-config\") pod \"dnsmasq-dns-5bf47b49b7-f8tfw\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.124500 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-f8tfw\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:35 crc 
kubenswrapper[4644]: I0204 08:57:35.225470 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-f8tfw\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.226307 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfnsp\" (UniqueName: \"kubernetes.io/projected/539a7b01-25e6-49ec-8d04-e743c92ed53f-kube-api-access-mfnsp\") pod \"dnsmasq-dns-5bf47b49b7-f8tfw\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.226254 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-f8tfw\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.226391 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-config\") pod \"dnsmasq-dns-5bf47b49b7-f8tfw\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.226708 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-f8tfw\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.227207 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-config\") pod \"dnsmasq-dns-5bf47b49b7-f8tfw\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.227300 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-f8tfw\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.238782 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ct7pm"] Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.255833 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfnsp\" (UniqueName: \"kubernetes.io/projected/539a7b01-25e6-49ec-8d04-e743c92ed53f-kube-api-access-mfnsp\") pod \"dnsmasq-dns-5bf47b49b7-f8tfw\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.313212 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-95k6h"] Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.314402 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.318647 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.327869 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-95k6h\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.327916 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-dns-svc\") pod \"dnsmasq-dns-8554648995-95k6h\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.327952 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrbnc\" (UniqueName: \"kubernetes.io/projected/b953e99a-7dbd-4300-95d5-844c241e3207-kube-api-access-qrbnc\") pod \"dnsmasq-dns-8554648995-95k6h\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.328098 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-config\") pod \"dnsmasq-dns-8554648995-95k6h\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.328139 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-95k6h\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.329233 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-95k6h"] Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.373834 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.428697 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-dns-svc\") pod \"dnsmasq-dns-8554648995-95k6h\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.428757 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrbnc\" (UniqueName: \"kubernetes.io/projected/b953e99a-7dbd-4300-95d5-844c241e3207-kube-api-access-qrbnc\") pod \"dnsmasq-dns-8554648995-95k6h\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.428840 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-config\") pod \"dnsmasq-dns-8554648995-95k6h\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.428865 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-95k6h\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.428901 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-95k6h\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.429967 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-95k6h\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.430002 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-config\") pod \"dnsmasq-dns-8554648995-95k6h\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.430143 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-95k6h\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.430238 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-dns-svc\") pod \"dnsmasq-dns-8554648995-95k6h\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 
08:57:35.450551 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrbnc\" (UniqueName: \"kubernetes.io/projected/b953e99a-7dbd-4300-95d5-844c241e3207-kube-api-access-qrbnc\") pod \"dnsmasq-dns-8554648995-95k6h\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.555200 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.556227 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 08:57:35 crc kubenswrapper[4644]: I0204 08:57:35.637044 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:40 crc kubenswrapper[4644]: I0204 08:57:40.654660 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vrbhk"] Feb 04 08:57:40 crc kubenswrapper[4644]: I0204 08:57:40.658081 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:57:40 crc kubenswrapper[4644]: I0204 08:57:40.675523 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vrbhk"] Feb 04 08:57:40 crc kubenswrapper[4644]: I0204 08:57:40.808413 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-utilities\") pod \"community-operators-vrbhk\" (UID: \"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4\") " pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:57:40 crc kubenswrapper[4644]: I0204 08:57:40.808751 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-catalog-content\") pod \"community-operators-vrbhk\" (UID: \"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4\") " pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:57:40 crc kubenswrapper[4644]: I0204 08:57:40.808884 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kws5s\" (UniqueName: \"kubernetes.io/projected/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-kube-api-access-kws5s\") pod \"community-operators-vrbhk\" (UID: \"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4\") " pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:57:40 crc kubenswrapper[4644]: I0204 08:57:40.910576 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-utilities\") pod \"community-operators-vrbhk\" (UID: \"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4\") " pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:57:40 crc kubenswrapper[4644]: I0204 08:57:40.910619 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-catalog-content\") pod \"community-operators-vrbhk\" (UID: \"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4\") " pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:57:40 crc kubenswrapper[4644]: I0204 08:57:40.910644 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kws5s\" (UniqueName: \"kubernetes.io/projected/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-kube-api-access-kws5s\") pod \"community-operators-vrbhk\" (UID: \"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4\") " pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:57:40 crc kubenswrapper[4644]: I0204 08:57:40.911656 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-utilities\") pod \"community-operators-vrbhk\" (UID: \"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4\") " pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:57:40 crc kubenswrapper[4644]: I0204 08:57:40.911901 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-catalog-content\") pod \"community-operators-vrbhk\" (UID: \"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4\") " pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:57:40 crc kubenswrapper[4644]: I0204 08:57:40.952353 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kws5s\" (UniqueName: \"kubernetes.io/projected/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-kube-api-access-kws5s\") pod \"community-operators-vrbhk\" (UID: \"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4\") " pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:57:40 crc kubenswrapper[4644]: I0204 08:57:40.992414 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:57:42 crc kubenswrapper[4644]: I0204 08:57:42.204547 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 04 08:57:42 crc kubenswrapper[4644]: I0204 08:57:42.287241 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="4536ebcc-8962-4cf4-9cae-5db170118156" containerName="galera" probeResult="failure" output=< Feb 04 08:57:42 crc kubenswrapper[4644]: wsrep_local_state_comment (Joined) differs from Synced Feb 04 08:57:42 crc kubenswrapper[4644]: > Feb 04 08:57:42 crc kubenswrapper[4644]: I0204 08:57:42.965374 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 04 08:57:42 crc kubenswrapper[4644]: I0204 08:57:42.968585 4644 generic.go:334] "Generic (PLEG): container finished" podID="cfd19433-aab2-4d07-99e5-edee81956813" containerID="55f7f2cedf9f2dc9d7b77ee907f455defd316d052c88269f3c66e4c1401cf77a" exitCode=0 Feb 04 08:57:42 crc kubenswrapper[4644]: I0204 08:57:42.968680 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cfd19433-aab2-4d07-99e5-edee81956813","Type":"ContainerDied","Data":"55f7f2cedf9f2dc9d7b77ee907f455defd316d052c88269f3c66e4c1401cf77a"} Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.007454 4644 generic.go:334] "Generic (PLEG): container finished" podID="36865c3a-9c0b-4853-a08d-c56af12dd154" containerID="e4345d8965cda3ed7b11ade6ab1dffd463c74eaaede2db68546025ba61bcb639" exitCode=0 Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.007515 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" event={"ID":"36865c3a-9c0b-4853-a08d-c56af12dd154","Type":"ContainerDied","Data":"e4345d8965cda3ed7b11ade6ab1dffd463c74eaaede2db68546025ba61bcb639"} Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.016267 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-ct7pm" podUID="01ad9ec9-e651-467e-a602-98a3ede6b550" containerName="init" containerID="cri-o://8e243c4511eba7000cdb46d4c0faa92aebf976707ed222e9e5ec5cf1502304dd" gracePeriod=10 Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.040358 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-95k6h"] Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.041437 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9bf50d46-1c85-4db8-9887-f30f832212c1","Type":"ContainerStarted","Data":"5b370c3d546d8262a7b038220facfc6113b9a242f92cd0fb5c7a83da53f90fdd"} Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.074724 4644 generic.go:334] "Generic (PLEG): container finished" podID="4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" containerID="476c3b5afbd1bbb2a54d82ba89febe946b8c332f1976574c446aab96bda77407" exitCode=0 Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.074855 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d","Type":"ContainerDied","Data":"476c3b5afbd1bbb2a54d82ba89febe946b8c332f1976574c446aab96bda77407"} Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.136153 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"c568fc8f-9f0b-496b-b39e-51ef99241e6e","Type":"ContainerStarted","Data":"076d926e156e6fab16a98e575b60d3186f327416a5fa259596a868669b35e27f"} Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.137315 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 04 08:57:43 crc kubenswrapper[4644]: W0204 08:57:43.178607 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4cecbc7_4505_46d1_8ddb_4b454e614fb1.slice/crio-49ad63361ba0988caee1ae5dc9d69014b6595f87f076887dbb206ee08a4937c1 WatchSource:0}: Error finding container 49ad63361ba0988caee1ae5dc9d69014b6595f87f076887dbb206ee08a4937c1: Status 404 returned error can't find the container with id 49ad63361ba0988caee1ae5dc9d69014b6595f87f076887dbb206ee08a4937c1 Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.180064 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"73360e1e-70eb-499b-b3a1-cd9bde6ac466","Type":"ContainerStarted","Data":"6d18fe0dc626a04cde8297a6858be1c65f79291c8b74cdf794ee49139f4dd8e2"} Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.186292 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mb84g"] Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.187689 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=25.709765953 podStartE2EDuration="56.187677797s" podCreationTimestamp="2026-02-04 08:56:47 +0000 UTC" firstStartedPulling="2026-02-04 08:57:11.913698462 +0000 UTC m=+941.953756227" lastFinishedPulling="2026-02-04 08:57:42.391610316 +0000 UTC m=+972.431668071" observedRunningTime="2026-02-04 08:57:43.166318373 +0000 UTC m=+973.206376128" watchObservedRunningTime="2026-02-04 08:57:43.187677797 +0000 UTC m=+973.227735552" Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.206366 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vrbhk"] Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.304058 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371972.550741 podStartE2EDuration="1m4.304033874s" podCreationTimestamp="2026-02-04 08:56:39 +0000 UTC" firstStartedPulling="2026-02-04 08:56:42.18166893 +0000 UTC m=+912.221726685" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:57:43.235351754 +0000 UTC m=+973.275409509" watchObservedRunningTime="2026-02-04 08:57:43.304033874 +0000 UTC m=+973.344091629" Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.338063 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.813031613 podStartE2EDuration="1m1.338037885s" podCreationTimestamp="2026-02-04 08:56:42 +0000 UTC" firstStartedPulling="2026-02-04 08:56:43.799783254 +0000 UTC m=+913.839841009" lastFinishedPulling="2026-02-04 08:57:42.324789526 +0000 UTC m=+972.364847281" observedRunningTime="2026-02-04 08:57:43.298104342 +0000 UTC m=+973.338162097" watchObservedRunningTime="2026-02-04 08:57:43.338037885 +0000 UTC m=+973.378095640" Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.352214 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=34.952517697 podStartE2EDuration="52.352190483s" 
podCreationTimestamp="2026-02-04 08:56:51 +0000 UTC" firstStartedPulling="2026-02-04 08:57:10.971404416 +0000 UTC m=+941.011462171" lastFinishedPulling="2026-02-04 08:57:28.371077182 +0000 UTC m=+958.411134957" observedRunningTime="2026-02-04 08:57:43.327840107 +0000 UTC m=+973.367897862" watchObservedRunningTime="2026-02-04 08:57:43.352190483 +0000 UTC m=+973.392248238" Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.395576 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-f8tfw"] Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.417166 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.503241 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.703845 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.713834 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ct7pm" Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.806909 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ad9ec9-e651-467e-a602-98a3ede6b550-config\") pod \"01ad9ec9-e651-467e-a602-98a3ede6b550\" (UID: \"01ad9ec9-e651-467e-a602-98a3ede6b550\") " Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.807228 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01ad9ec9-e651-467e-a602-98a3ede6b550-dns-svc\") pod \"01ad9ec9-e651-467e-a602-98a3ede6b550\" (UID: \"01ad9ec9-e651-467e-a602-98a3ede6b550\") " Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.807355 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srrrv\" (UniqueName: \"kubernetes.io/projected/01ad9ec9-e651-467e-a602-98a3ede6b550-kube-api-access-srrrv\") pod \"01ad9ec9-e651-467e-a602-98a3ede6b550\" (UID: \"01ad9ec9-e651-467e-a602-98a3ede6b550\") " Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.814562 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ad9ec9-e651-467e-a602-98a3ede6b550-kube-api-access-srrrv" (OuterVolumeSpecName: "kube-api-access-srrrv") pod "01ad9ec9-e651-467e-a602-98a3ede6b550" (UID: "01ad9ec9-e651-467e-a602-98a3ede6b550"). InnerVolumeSpecName "kube-api-access-srrrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.833600 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ad9ec9-e651-467e-a602-98a3ede6b550-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "01ad9ec9-e651-467e-a602-98a3ede6b550" (UID: "01ad9ec9-e651-467e-a602-98a3ede6b550"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.860489 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ad9ec9-e651-467e-a602-98a3ede6b550-config" (OuterVolumeSpecName: "config") pod "01ad9ec9-e651-467e-a602-98a3ede6b550" (UID: "01ad9ec9-e651-467e-a602-98a3ede6b550"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.910028 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ad9ec9-e651-467e-a602-98a3ede6b550-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.910057 4644 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01ad9ec9-e651-467e-a602-98a3ede6b550-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:43 crc kubenswrapper[4644]: I0204 08:57:43.910067 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srrrv\" (UniqueName: \"kubernetes.io/projected/01ad9ec9-e651-467e-a602-98a3ede6b550-kube-api-access-srrrv\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.005715 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.112161 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36865c3a-9c0b-4853-a08d-c56af12dd154-config\") pod \"36865c3a-9c0b-4853-a08d-c56af12dd154\" (UID: \"36865c3a-9c0b-4853-a08d-c56af12dd154\") " Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.112288 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36865c3a-9c0b-4853-a08d-c56af12dd154-dns-svc\") pod \"36865c3a-9c0b-4853-a08d-c56af12dd154\" (UID: \"36865c3a-9c0b-4853-a08d-c56af12dd154\") " Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.112567 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jnvh\" (UniqueName: \"kubernetes.io/projected/36865c3a-9c0b-4853-a08d-c56af12dd154-kube-api-access-6jnvh\") pod \"36865c3a-9c0b-4853-a08d-c56af12dd154\" (UID: \"36865c3a-9c0b-4853-a08d-c56af12dd154\") " Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.119539 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36865c3a-9c0b-4853-a08d-c56af12dd154-kube-api-access-6jnvh" (OuterVolumeSpecName: "kube-api-access-6jnvh") pod "36865c3a-9c0b-4853-a08d-c56af12dd154" (UID: "36865c3a-9c0b-4853-a08d-c56af12dd154"). InnerVolumeSpecName "kube-api-access-6jnvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.131655 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36865c3a-9c0b-4853-a08d-c56af12dd154-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36865c3a-9c0b-4853-a08d-c56af12dd154" (UID: "36865c3a-9c0b-4853-a08d-c56af12dd154"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.143874 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36865c3a-9c0b-4853-a08d-c56af12dd154-config" (OuterVolumeSpecName: "config") pod "36865c3a-9c0b-4853-a08d-c56af12dd154" (UID: "36865c3a-9c0b-4853-a08d-c56af12dd154"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.186922 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42467ee2-0414-443d-96c2-61b4118dd8d6","Type":"ContainerStarted","Data":"3132e04b42feeff5fc7d467cdf857222f933db6a8a61271a9cb3a25d685adc2f"} Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.186993 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.189088 4644 generic.go:334] "Generic (PLEG): container finished" podID="01ad9ec9-e651-467e-a602-98a3ede6b550" containerID="8e243c4511eba7000cdb46d4c0faa92aebf976707ed222e9e5ec5cf1502304dd" exitCode=0 Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.189153 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ct7pm" event={"ID":"01ad9ec9-e651-467e-a602-98a3ede6b550","Type":"ContainerDied","Data":"8e243c4511eba7000cdb46d4c0faa92aebf976707ed222e9e5ec5cf1502304dd"} Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.189177 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ct7pm" event={"ID":"01ad9ec9-e651-467e-a602-98a3ede6b550","Type":"ContainerDied","Data":"71248621349e8b3fc051052871a1099592e87c7681ee932860a27aa3530773e7"} Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.189173 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ct7pm" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.189192 4644 scope.go:117] "RemoveContainer" containerID="8e243c4511eba7000cdb46d4c0faa92aebf976707ed222e9e5ec5cf1502304dd" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.190906 4644 generic.go:334] "Generic (PLEG): container finished" podID="b953e99a-7dbd-4300-95d5-844c241e3207" containerID="7f7bb30701847826ebefb75ed7a1ed7777fee763c2d3dbcd827a43fb497ee99c" exitCode=0 Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.190947 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-95k6h" event={"ID":"b953e99a-7dbd-4300-95d5-844c241e3207","Type":"ContainerDied","Data":"7f7bb30701847826ebefb75ed7a1ed7777fee763c2d3dbcd827a43fb497ee99c"} Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.190963 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-95k6h" event={"ID":"b953e99a-7dbd-4300-95d5-844c241e3207","Type":"ContainerStarted","Data":"af9cad6c489266faf4335a3144da505ed67a4130312e255a9eb85495af4d30b0"} Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.194314 4644 generic.go:334] "Generic (PLEG): container finished" podID="539a7b01-25e6-49ec-8d04-e743c92ed53f" containerID="b0b786460ecd8837e16e418d03bcfab2c99048705d278330f51d12a671390729" exitCode=0 Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.194395 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" event={"ID":"539a7b01-25e6-49ec-8d04-e743c92ed53f","Type":"ContainerDied","Data":"b0b786460ecd8837e16e418d03bcfab2c99048705d278330f51d12a671390729"} Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.194415 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" event={"ID":"539a7b01-25e6-49ec-8d04-e743c92ed53f","Type":"ContainerStarted","Data":"25567c16a8df97b489fc1cb541fd6796a9c9812d4ad66c0a1f1d3c1269815234"} Feb 04 
08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.196615 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d","Type":"ContainerStarted","Data":"773af1f1b7846d8c1b618a8fdeb081326be3ecaed75e21b52f0b8d235bb350d8"} Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.197425 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.204814 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" event={"ID":"36865c3a-9c0b-4853-a08d-c56af12dd154","Type":"ContainerDied","Data":"2a2850a08786915b6125ab5995fd59945732d95aa38fd1cd1a95b8a8e8e821e5"} Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.204922 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2t89k" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.206448 4644 scope.go:117] "RemoveContainer" containerID="8e243c4511eba7000cdb46d4c0faa92aebf976707ed222e9e5ec5cf1502304dd" Feb 04 08:57:44 crc kubenswrapper[4644]: E0204 08:57:44.208769 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e243c4511eba7000cdb46d4c0faa92aebf976707ed222e9e5ec5cf1502304dd\": container with ID starting with 8e243c4511eba7000cdb46d4c0faa92aebf976707ed222e9e5ec5cf1502304dd not found: ID does not exist" containerID="8e243c4511eba7000cdb46d4c0faa92aebf976707ed222e9e5ec5cf1502304dd" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.208822 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e243c4511eba7000cdb46d4c0faa92aebf976707ed222e9e5ec5cf1502304dd"} err="failed to get container status \"8e243c4511eba7000cdb46d4c0faa92aebf976707ed222e9e5ec5cf1502304dd\": rpc error: code = NotFound desc = could not find container \"8e243c4511eba7000cdb46d4c0faa92aebf976707ed222e9e5ec5cf1502304dd\": container with ID starting with 8e243c4511eba7000cdb46d4c0faa92aebf976707ed222e9e5ec5cf1502304dd not found: ID does not exist" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.208848 4644 scope.go:117] "RemoveContainer" containerID="e4345d8965cda3ed7b11ade6ab1dffd463c74eaaede2db68546025ba61bcb639" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.217890 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jnvh\" (UniqueName: \"kubernetes.io/projected/36865c3a-9c0b-4853-a08d-c56af12dd154-kube-api-access-6jnvh\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.217927 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36865c3a-9c0b-4853-a08d-c56af12dd154-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.217943 4644 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36865c3a-9c0b-4853-a08d-c56af12dd154-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.221415 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f","Type":"ContainerStarted","Data":"dc5ce8e06c311c12b7079096e418e2345ddf27153bfce172f678b7d7f6da6e45"} Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.227853 4644 
generic.go:334] "Generic (PLEG): container finished" podID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerID="41f61709ba34f8235ac74d240dd46572aa8f5b1dcc2a0d26546369bbd1e0629c" exitCode=0 Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.227939 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrbhk" event={"ID":"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4","Type":"ContainerDied","Data":"41f61709ba34f8235ac74d240dd46572aa8f5b1dcc2a0d26546369bbd1e0629c"} Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.227967 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrbhk" event={"ID":"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4","Type":"ContainerStarted","Data":"ccdb3ed4a6448042f0e8f26dd935caef3175886a0f075fb40dbf69d1eb2d3019"} Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.240164 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mb84g" event={"ID":"b4cecbc7-4505-46d1-8ddb-4b454e614fb1","Type":"ContainerStarted","Data":"490c9c9f0dba83377df3b1ec8274bc3047baab8fde297b8f10b76d0209fcb73f"} Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.240215 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mb84g" event={"ID":"b4cecbc7-4505-46d1-8ddb-4b454e614fb1","Type":"ContainerStarted","Data":"49ad63361ba0988caee1ae5dc9d69014b6595f87f076887dbb206ee08a4937c1"} Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.244711 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=28.14670357 podStartE2EDuration="1m0.244688773s" podCreationTimestamp="2026-02-04 08:56:44 +0000 UTC" firstStartedPulling="2026-02-04 08:57:10.491497393 +0000 UTC m=+940.531555148" lastFinishedPulling="2026-02-04 08:57:42.589482596 +0000 UTC m=+972.629540351" observedRunningTime="2026-02-04 08:57:44.20952031 +0000 UTC m=+974.249578075" watchObservedRunningTime="2026-02-04 08:57:44.244688773 +0000 UTC m=+974.284746528" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.259761 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cfd19433-aab2-4d07-99e5-edee81956813","Type":"ContainerStarted","Data":"f407ae2255e701b35fbd409dafecb1cf487bae6e50cc6aca6cf542c5ffa7a1fa"} Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.260880 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.264983 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.141778851 podStartE2EDuration="1m6.264960288s" podCreationTimestamp="2026-02-04 08:56:38 +0000 UTC" firstStartedPulling="2026-02-04 08:56:40.669360175 +0000 UTC m=+910.709417930" lastFinishedPulling="2026-02-04 08:57:09.792541612 +0000 UTC m=+939.832599367" observedRunningTime="2026-02-04 08:57:44.254627355 +0000 UTC m=+974.294685130" watchObservedRunningTime="2026-02-04 08:57:44.264960288 +0000 UTC m=+974.305018043" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.387479 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lffcr"] Feb 04 08:57:44 crc kubenswrapper[4644]: E0204 08:57:44.387888 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ad9ec9-e651-467e-a602-98a3ede6b550" containerName="init" Feb 04 08:57:44 crc 
kubenswrapper[4644]: I0204 08:57:44.387905 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ad9ec9-e651-467e-a602-98a3ede6b550" containerName="init" Feb 04 08:57:44 crc kubenswrapper[4644]: E0204 08:57:44.387964 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36865c3a-9c0b-4853-a08d-c56af12dd154" containerName="init" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.387973 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="36865c3a-9c0b-4853-a08d-c56af12dd154" containerName="init" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.388151 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="36865c3a-9c0b-4853-a08d-c56af12dd154" containerName="init" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.388165 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ad9ec9-e651-467e-a602-98a3ede6b550" containerName="init" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.389546 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.427811 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.442212 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lffcr"] Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.512513 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2t89k"] Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.525737 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c536da4-4882-4716-b4ae-4894dcc769e1-catalog-content\") pod \"redhat-operators-lffcr\" (UID: \"1c536da4-4882-4716-b4ae-4894dcc769e1\") " pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.525814 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbhgm\" (UniqueName: \"kubernetes.io/projected/1c536da4-4882-4716-b4ae-4894dcc769e1-kube-api-access-vbhgm\") pod \"redhat-operators-lffcr\" (UID: \"1c536da4-4882-4716-b4ae-4894dcc769e1\") " pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.525919 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c536da4-4882-4716-b4ae-4894dcc769e1-utilities\") pod \"redhat-operators-lffcr\" (UID: \"1c536da4-4882-4716-b4ae-4894dcc769e1\") " pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.532084 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2t89k"] Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.622821 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ct7pm"] Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.626773 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c536da4-4882-4716-b4ae-4894dcc769e1-utilities\") pod \"redhat-operators-lffcr\" (UID: \"1c536da4-4882-4716-b4ae-4894dcc769e1\") " pod="openshift-marketplace/redhat-operators-lffcr" Feb 
04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.626823 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c536da4-4882-4716-b4ae-4894dcc769e1-catalog-content\") pod \"redhat-operators-lffcr\" (UID: \"1c536da4-4882-4716-b4ae-4894dcc769e1\") " pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.626862 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbhgm\" (UniqueName: \"kubernetes.io/projected/1c536da4-4882-4716-b4ae-4894dcc769e1-kube-api-access-vbhgm\") pod \"redhat-operators-lffcr\" (UID: \"1c536da4-4882-4716-b4ae-4894dcc769e1\") " pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.627471 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c536da4-4882-4716-b4ae-4894dcc769e1-utilities\") pod \"redhat-operators-lffcr\" (UID: \"1c536da4-4882-4716-b4ae-4894dcc769e1\") " pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.627676 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c536da4-4882-4716-b4ae-4894dcc769e1-catalog-content\") pod \"redhat-operators-lffcr\" (UID: \"1c536da4-4882-4716-b4ae-4894dcc769e1\") " pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.642969 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ct7pm"] Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.649546 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.149608157 podStartE2EDuration="1m6.649520626s" podCreationTimestamp="2026-02-04 08:56:38 +0000 UTC" firstStartedPulling="2026-02-04 08:56:40.208636847 +0000 UTC m=+910.248694602" lastFinishedPulling="2026-02-04 08:57:08.708549316 +0000 UTC m=+938.748607071" observedRunningTime="2026-02-04 08:57:44.631050291 +0000 UTC m=+974.671108046" watchObservedRunningTime="2026-02-04 08:57:44.649520626 +0000 UTC m=+974.689578381" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.679531 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbhgm\" (UniqueName: \"kubernetes.io/projected/1c536da4-4882-4716-b4ae-4894dcc769e1-kube-api-access-vbhgm\") pod \"redhat-operators-lffcr\" (UID: \"1c536da4-4882-4716-b4ae-4894dcc769e1\") " pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.681007 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-mb84g" podStartSLOduration=10.680990668 podStartE2EDuration="10.680990668s" podCreationTimestamp="2026-02-04 08:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:57:44.679756274 +0000 UTC m=+974.719814039" watchObservedRunningTime="2026-02-04 08:57:44.680990668 +0000 UTC m=+974.721048423" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.721662 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ad9ec9-e651-467e-a602-98a3ede6b550" path="/var/lib/kubelet/pods/01ad9ec9-e651-467e-a602-98a3ede6b550/volumes" Feb 04 08:57:44 
crc kubenswrapper[4644]: I0204 08:57:44.733651 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36865c3a-9c0b-4853-a08d-c56af12dd154" path="/var/lib/kubelet/pods/36865c3a-9c0b-4853-a08d-c56af12dd154/volumes" Feb 04 08:57:44 crc kubenswrapper[4644]: I0204 08:57:44.790753 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 08:57:45 crc kubenswrapper[4644]: I0204 08:57:45.278131 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-95k6h" event={"ID":"b953e99a-7dbd-4300-95d5-844c241e3207","Type":"ContainerStarted","Data":"5a3215b2fed3b8118828da8d3431f626f6777f4283572dba9de14b487258df8c"} Feb 04 08:57:45 crc kubenswrapper[4644]: I0204 08:57:45.278489 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:45 crc kubenswrapper[4644]: I0204 08:57:45.284294 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" event={"ID":"539a7b01-25e6-49ec-8d04-e743c92ed53f","Type":"ContainerStarted","Data":"113ad60f71f5f791f25ddd79f6799abcc2870f00e2b288ed4bbcafc738469dcb"} Feb 04 08:57:45 crc kubenswrapper[4644]: I0204 08:57:45.374744 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:45 crc kubenswrapper[4644]: I0204 08:57:45.469870 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-95k6h" podStartSLOduration=10.469852285 podStartE2EDuration="10.469852285s" podCreationTimestamp="2026-02-04 08:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:57:45.383034198 +0000 UTC m=+975.423091953" watchObservedRunningTime="2026-02-04 08:57:45.469852285 +0000 UTC m=+975.509910050" Feb 04 08:57:45 crc kubenswrapper[4644]: I0204 08:57:45.471127 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lffcr"] Feb 04 08:57:45 crc kubenswrapper[4644]: I0204 08:57:45.484261 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" podStartSLOduration=10.484223448 podStartE2EDuration="10.484223448s" podCreationTimestamp="2026-02-04 08:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:57:45.477139435 +0000 UTC m=+975.517197200" watchObservedRunningTime="2026-02-04 08:57:45.484223448 +0000 UTC m=+975.524281203" Feb 04 08:57:45 crc kubenswrapper[4644]: I0204 08:57:45.703638 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 04 08:57:45 crc kubenswrapper[4644]: I0204 08:57:45.798999 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.294710 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrbhk" event={"ID":"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4","Type":"ContainerStarted","Data":"8a3c344b2aad4893de2980f6dd18367f287ee7da95b374402d22321f10794493"} Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.296993 4644 generic.go:334] "Generic (PLEG): container finished" podID="1c536da4-4882-4716-b4ae-4894dcc769e1" 
containerID="7e0e57a9a6a893193c143cdd7320e95a786469fa80310b6f4805b0006f7e2c02" exitCode=0 Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.297631 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lffcr" event={"ID":"1c536da4-4882-4716-b4ae-4894dcc769e1","Type":"ContainerDied","Data":"7e0e57a9a6a893193c143cdd7320e95a786469fa80310b6f4805b0006f7e2c02"} Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.297658 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lffcr" event={"ID":"1c536da4-4882-4716-b4ae-4894dcc769e1","Type":"ContainerStarted","Data":"4dca71096204f8c3867ab60eb54e52b8e32e90d45da37044a2d9343ad52274df"} Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.386813 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.755884 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.757427 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.769658 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-gsqgh" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.769792 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.769839 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.769664 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.792452 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.876123 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.876186 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-scripts\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.876214 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-config\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.876603 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc 
kubenswrapper[4644]: I0204 08:57:46.876695 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.876763 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.876815 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p898m\" (UniqueName: \"kubernetes.io/projected/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-kube-api-access-p898m\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.977960 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.978024 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.978063 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p898m\" (UniqueName: \"kubernetes.io/projected/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-kube-api-access-p898m\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.978110 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.978140 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-scripts\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.978164 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-config\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.978219 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-metrics-certs-tls-certs\") pod 
\"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.981243 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-scripts\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.981583 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.982277 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-config\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.989429 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.992827 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.994601 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:46 crc kubenswrapper[4644]: I0204 08:57:46.998148 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p898m\" (UniqueName: \"kubernetes.io/projected/92ea26d9-2316-4fe5-b998-ed9fa22e6a2a-kube-api-access-p898m\") pod \"ovn-northd-0\" (UID: \"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a\") " pod="openstack/ovn-northd-0" Feb 04 08:57:47 crc kubenswrapper[4644]: I0204 08:57:47.085948 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 04 08:57:47 crc kubenswrapper[4644]: I0204 08:57:47.312707 4644 generic.go:334] "Generic (PLEG): container finished" podID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerID="8a3c344b2aad4893de2980f6dd18367f287ee7da95b374402d22321f10794493" exitCode=0 Feb 04 08:57:47 crc kubenswrapper[4644]: I0204 08:57:47.314248 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrbhk" event={"ID":"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4","Type":"ContainerDied","Data":"8a3c344b2aad4893de2980f6dd18367f287ee7da95b374402d22321f10794493"} Feb 04 08:57:47 crc kubenswrapper[4644]: I0204 08:57:47.692093 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 04 08:57:48 crc kubenswrapper[4644]: I0204 08:57:48.256516 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 04 08:57:48 crc kubenswrapper[4644]: I0204 08:57:48.326286 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a","Type":"ContainerStarted","Data":"4c85ae000b64f69cc7c7037fcfb2743ce7beaa3e5442d1a161c53f56a9ec4bdc"} Feb 04 08:57:48 crc kubenswrapper[4644]: I0204 08:57:48.686952 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8nfv7" podUID="964cdd6e-b29a-401d-9bb0-3375b663a899" containerName="ovn-controller" probeResult="failure" output=< Feb 04 08:57:48 crc kubenswrapper[4644]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 04 08:57:48 crc kubenswrapper[4644]: > Feb 04 08:57:48 crc kubenswrapper[4644]: I0204 08:57:48.876031 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:57:48 crc kubenswrapper[4644]: I0204 08:57:48.889002 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zzhbv" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.312653 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8nfv7-config-96v84"] Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.313523 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.320598 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.340264 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrbhk" event={"ID":"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4","Type":"ContainerStarted","Data":"4801b5e16c2da076431365c30986a247fefed3d8248b8621dc295f73b0274c60"} Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.346214 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lffcr" event={"ID":"1c536da4-4882-4716-b4ae-4894dcc769e1","Type":"ContainerStarted","Data":"3e2b894654072ee56f417e213a689edecdf956aff0fec27df233f9b700e455b1"} Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.351069 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8nfv7-config-96v84"] Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.423225 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-log-ovn\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.423268 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-scripts\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.423347 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69dmh\" (UniqueName: \"kubernetes.io/projected/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-kube-api-access-69dmh\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.423393 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-run-ovn\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.423416 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-additional-scripts\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.423445 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-run\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" 
Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.475304 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vrbhk" podStartSLOduration=5.385530796 podStartE2EDuration="9.475285213s" podCreationTimestamp="2026-02-04 08:57:40 +0000 UTC" firstStartedPulling="2026-02-04 08:57:44.241375523 +0000 UTC m=+974.281433278" lastFinishedPulling="2026-02-04 08:57:48.33112994 +0000 UTC m=+978.371187695" observedRunningTime="2026-02-04 08:57:49.469833634 +0000 UTC m=+979.509891399" watchObservedRunningTime="2026-02-04 08:57:49.475285213 +0000 UTC m=+979.515342968" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.525008 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-run-ovn\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.525084 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-additional-scripts\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.525147 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-run\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.525228 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-log-ovn\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.525260 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-scripts\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.525355 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69dmh\" (UniqueName: \"kubernetes.io/projected/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-kube-api-access-69dmh\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.526057 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-run-ovn\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.526987 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-additional-scripts\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.527064 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-run\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.527120 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-log-ovn\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.530097 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-scripts\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.556015 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69dmh\" (UniqueName: \"kubernetes.io/projected/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-kube-api-access-69dmh\") pod \"ovn-controller-8nfv7-config-96v84\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.635605 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:49 crc kubenswrapper[4644]: I0204 08:57:49.844754 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 04 08:57:50 crc kubenswrapper[4644]: I0204 08:57:50.356894 4644 generic.go:334] "Generic (PLEG): container finished" podID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerID="3e2b894654072ee56f417e213a689edecdf956aff0fec27df233f9b700e455b1" exitCode=0 Feb 04 08:57:50 crc kubenswrapper[4644]: I0204 08:57:50.357023 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lffcr" event={"ID":"1c536da4-4882-4716-b4ae-4894dcc769e1","Type":"ContainerDied","Data":"3e2b894654072ee56f417e213a689edecdf956aff0fec27df233f9b700e455b1"} Feb 04 08:57:50 crc kubenswrapper[4644]: I0204 08:57:50.375459 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:50 crc kubenswrapper[4644]: I0204 08:57:50.640582 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:57:50 crc kubenswrapper[4644]: I0204 08:57:50.807455 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-f8tfw"] Feb 04 08:57:50 crc kubenswrapper[4644]: I0204 08:57:50.993689 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:57:50 crc kubenswrapper[4644]: I0204 08:57:50.993857 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.159689 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8nfv7-config-96v84"] Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.313010 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.313049 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.388910 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lffcr" event={"ID":"1c536da4-4882-4716-b4ae-4894dcc769e1","Type":"ContainerStarted","Data":"4fbe46b700677b8e24ec42642dc2f2b502bcb7925bc5346a880f37eedcce140a"} Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.394239 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8nfv7-config-96v84" event={"ID":"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8","Type":"ContainerStarted","Data":"deba59e976c7a5a10feb95529ef5a3c2d2c556b6eaa1d7c7df4e155138caf44f"} Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.402398 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a","Type":"ContainerStarted","Data":"69cec26d565cc8e7d18dc91ac8d70b06469612e604c33bd189bb1b5c8540e55e"} Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.402443 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"92ea26d9-2316-4fe5-b998-ed9fa22e6a2a","Type":"ContainerStarted","Data":"b08628eab779f30eba3c057c5d6df2366a7cda53566435107cb652e0b80ffd45"} Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.402462 4644 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.402598 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" podUID="539a7b01-25e6-49ec-8d04-e743c92ed53f" containerName="dnsmasq-dns" containerID="cri-o://113ad60f71f5f791f25ddd79f6799abcc2870f00e2b288ed4bbcafc738469dcb" gracePeriod=10 Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.507246 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.803218094 podStartE2EDuration="5.507232552s" podCreationTimestamp="2026-02-04 08:57:46 +0000 UTC" firstStartedPulling="2026-02-04 08:57:47.681957467 +0000 UTC m=+977.722015222" lastFinishedPulling="2026-02-04 08:57:50.385971925 +0000 UTC m=+980.426029680" observedRunningTime="2026-02-04 08:57:51.50604239 +0000 UTC m=+981.546100145" watchObservedRunningTime="2026-02-04 08:57:51.507232552 +0000 UTC m=+981.547290308" Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.512998 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lffcr" podStartSLOduration=2.95375646 podStartE2EDuration="7.51297959s" podCreationTimestamp="2026-02-04 08:57:44 +0000 UTC" firstStartedPulling="2026-02-04 08:57:46.298679106 +0000 UTC m=+976.338736861" lastFinishedPulling="2026-02-04 08:57:50.857902236 +0000 UTC m=+980.897959991" observedRunningTime="2026-02-04 08:57:51.446549402 +0000 UTC m=+981.486607157" watchObservedRunningTime="2026-02-04 08:57:51.51297959 +0000 UTC m=+981.553037345" Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.737471 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-77jxs"] Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.738551 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-77jxs" Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.752441 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-77jxs"] Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.752606 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.770577 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4bxd\" (UniqueName: \"kubernetes.io/projected/675bfb30-a6b7-4900-aada-393599154da1-kube-api-access-d4bxd\") pod \"root-account-create-update-77jxs\" (UID: \"675bfb30-a6b7-4900-aada-393599154da1\") " pod="openstack/root-account-create-update-77jxs" Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.770646 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/675bfb30-a6b7-4900-aada-393599154da1-operator-scripts\") pod \"root-account-create-update-77jxs\" (UID: \"675bfb30-a6b7-4900-aada-393599154da1\") " pod="openstack/root-account-create-update-77jxs" Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.872226 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4bxd\" (UniqueName: \"kubernetes.io/projected/675bfb30-a6b7-4900-aada-393599154da1-kube-api-access-d4bxd\") pod \"root-account-create-update-77jxs\" (UID: \"675bfb30-a6b7-4900-aada-393599154da1\") " pod="openstack/root-account-create-update-77jxs" Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.872317 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/675bfb30-a6b7-4900-aada-393599154da1-operator-scripts\") pod \"root-account-create-update-77jxs\" (UID: \"675bfb30-a6b7-4900-aada-393599154da1\") " pod="openstack/root-account-create-update-77jxs" Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.873412 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/675bfb30-a6b7-4900-aada-393599154da1-operator-scripts\") pod \"root-account-create-update-77jxs\" (UID: \"675bfb30-a6b7-4900-aada-393599154da1\") " pod="openstack/root-account-create-update-77jxs" Feb 04 08:57:51 crc kubenswrapper[4644]: I0204 08:57:51.902407 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4bxd\" (UniqueName: \"kubernetes.io/projected/675bfb30-a6b7-4900-aada-393599154da1-kube-api-access-d4bxd\") pod \"root-account-create-update-77jxs\" (UID: \"675bfb30-a6b7-4900-aada-393599154da1\") " pod="openstack/root-account-create-update-77jxs" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.102353 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-77jxs" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.206514 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vrbhk" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerName="registry-server" probeResult="failure" output=< Feb 04 08:57:52 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 08:57:52 crc kubenswrapper[4644]: > Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.317391 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.380822 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-ovsdbserver-nb\") pod \"539a7b01-25e6-49ec-8d04-e743c92ed53f\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.381426 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-dns-svc\") pod \"539a7b01-25e6-49ec-8d04-e743c92ed53f\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.381530 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfnsp\" (UniqueName: \"kubernetes.io/projected/539a7b01-25e6-49ec-8d04-e743c92ed53f-kube-api-access-mfnsp\") pod \"539a7b01-25e6-49ec-8d04-e743c92ed53f\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.381581 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-config\") pod \"539a7b01-25e6-49ec-8d04-e743c92ed53f\" (UID: \"539a7b01-25e6-49ec-8d04-e743c92ed53f\") " Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.393600 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/539a7b01-25e6-49ec-8d04-e743c92ed53f-kube-api-access-mfnsp" (OuterVolumeSpecName: "kube-api-access-mfnsp") pod "539a7b01-25e6-49ec-8d04-e743c92ed53f" (UID: "539a7b01-25e6-49ec-8d04-e743c92ed53f"). InnerVolumeSpecName "kube-api-access-mfnsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.435561 4644 generic.go:334] "Generic (PLEG): container finished" podID="539a7b01-25e6-49ec-8d04-e743c92ed53f" containerID="113ad60f71f5f791f25ddd79f6799abcc2870f00e2b288ed4bbcafc738469dcb" exitCode=0 Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.435652 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" event={"ID":"539a7b01-25e6-49ec-8d04-e743c92ed53f","Type":"ContainerDied","Data":"113ad60f71f5f791f25ddd79f6799abcc2870f00e2b288ed4bbcafc738469dcb"} Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.435682 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" event={"ID":"539a7b01-25e6-49ec-8d04-e743c92ed53f","Type":"ContainerDied","Data":"25567c16a8df97b489fc1cb541fd6796a9c9812d4ad66c0a1f1d3c1269815234"} Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.435702 4644 scope.go:117] "RemoveContainer" containerID="113ad60f71f5f791f25ddd79f6799abcc2870f00e2b288ed4bbcafc738469dcb" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.435841 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-f8tfw" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.448365 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8nfv7-config-96v84" event={"ID":"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8","Type":"ContainerStarted","Data":"4ffc14471304be242059cd8c6c74093f200e5f50d92116b8e2e8104093fe4aa0"} Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.474981 4644 scope.go:117] "RemoveContainer" containerID="b0b786460ecd8837e16e418d03bcfab2c99048705d278330f51d12a671390729" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.484953 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfnsp\" (UniqueName: \"kubernetes.io/projected/539a7b01-25e6-49ec-8d04-e743c92ed53f-kube-api-access-mfnsp\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.495093 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "539a7b01-25e6-49ec-8d04-e743c92ed53f" (UID: "539a7b01-25e6-49ec-8d04-e743c92ed53f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.500741 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-8nfv7-config-96v84" podStartSLOduration=3.500719272 podStartE2EDuration="3.500719272s" podCreationTimestamp="2026-02-04 08:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:57:52.483907392 +0000 UTC m=+982.523965147" watchObservedRunningTime="2026-02-04 08:57:52.500719272 +0000 UTC m=+982.540777027" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.501584 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "539a7b01-25e6-49ec-8d04-e743c92ed53f" (UID: "539a7b01-25e6-49ec-8d04-e743c92ed53f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.504833 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-config" (OuterVolumeSpecName: "config") pod "539a7b01-25e6-49ec-8d04-e743c92ed53f" (UID: "539a7b01-25e6-49ec-8d04-e743c92ed53f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.526253 4644 scope.go:117] "RemoveContainer" containerID="113ad60f71f5f791f25ddd79f6799abcc2870f00e2b288ed4bbcafc738469dcb" Feb 04 08:57:52 crc kubenswrapper[4644]: E0204 08:57:52.527564 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"113ad60f71f5f791f25ddd79f6799abcc2870f00e2b288ed4bbcafc738469dcb\": container with ID starting with 113ad60f71f5f791f25ddd79f6799abcc2870f00e2b288ed4bbcafc738469dcb not found: ID does not exist" containerID="113ad60f71f5f791f25ddd79f6799abcc2870f00e2b288ed4bbcafc738469dcb" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.527595 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"113ad60f71f5f791f25ddd79f6799abcc2870f00e2b288ed4bbcafc738469dcb"} err="failed to get container status \"113ad60f71f5f791f25ddd79f6799abcc2870f00e2b288ed4bbcafc738469dcb\": rpc error: code = NotFound desc = could not find container \"113ad60f71f5f791f25ddd79f6799abcc2870f00e2b288ed4bbcafc738469dcb\": container with ID starting with 113ad60f71f5f791f25ddd79f6799abcc2870f00e2b288ed4bbcafc738469dcb not found: ID does not exist" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.527645 4644 scope.go:117] "RemoveContainer" containerID="b0b786460ecd8837e16e418d03bcfab2c99048705d278330f51d12a671390729" Feb 04 08:57:52 crc kubenswrapper[4644]: E0204 08:57:52.529921 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b786460ecd8837e16e418d03bcfab2c99048705d278330f51d12a671390729\": container with ID starting with b0b786460ecd8837e16e418d03bcfab2c99048705d278330f51d12a671390729 not found: ID does not exist" containerID="b0b786460ecd8837e16e418d03bcfab2c99048705d278330f51d12a671390729" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.529947 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b786460ecd8837e16e418d03bcfab2c99048705d278330f51d12a671390729"} err="failed to get container status \"b0b786460ecd8837e16e418d03bcfab2c99048705d278330f51d12a671390729\": rpc error: code = NotFound desc = could not find container \"b0b786460ecd8837e16e418d03bcfab2c99048705d278330f51d12a671390729\": container with ID starting with b0b786460ecd8837e16e418d03bcfab2c99048705d278330f51d12a671390729 not found: ID does not exist" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.586216 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.586249 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.586259 4644 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/539a7b01-25e6-49ec-8d04-e743c92ed53f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.765303 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-f8tfw"] Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.785889 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-f8tfw"] Feb 04 08:57:52 crc kubenswrapper[4644]: I0204 08:57:52.893568 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-77jxs"] Feb 04 08:57:53 crc kubenswrapper[4644]: I0204 08:57:53.457286 4644 generic.go:334] "Generic (PLEG): container finished" podID="70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8" containerID="4ffc14471304be242059cd8c6c74093f200e5f50d92116b8e2e8104093fe4aa0" exitCode=0 Feb 04 08:57:53 crc kubenswrapper[4644]: I0204 08:57:53.457516 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8nfv7-config-96v84" event={"ID":"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8","Type":"ContainerDied","Data":"4ffc14471304be242059cd8c6c74093f200e5f50d92116b8e2e8104093fe4aa0"} Feb 04 08:57:53 crc kubenswrapper[4644]: I0204 08:57:53.460115 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-77jxs" event={"ID":"675bfb30-a6b7-4900-aada-393599154da1","Type":"ContainerStarted","Data":"5a02b5a6c91e5d2c7a931d70fe94b6118cb29fcb86493b1411fd7f96520a428a"} Feb 04 08:57:53 crc kubenswrapper[4644]: I0204 08:57:53.460150 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-77jxs" event={"ID":"675bfb30-a6b7-4900-aada-393599154da1","Type":"ContainerStarted","Data":"37c47d17cee1f4ff3f2a3fbf66663eddaf37b1fa77b9e96400db695e07d51d0e"} Feb 04 08:57:53 crc kubenswrapper[4644]: I0204 08:57:53.506212 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-77jxs" podStartSLOduration=2.506185799 podStartE2EDuration="2.506185799s" podCreationTimestamp="2026-02-04 08:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:57:53.500594386 +0000 UTC m=+983.540652151" watchObservedRunningTime="2026-02-04 08:57:53.506185799 +0000 UTC m=+983.546243554" Feb 04 08:57:53 crc kubenswrapper[4644]: I0204 08:57:53.588772 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-8nfv7" Feb 04 08:57:54 crc kubenswrapper[4644]: I0204 08:57:54.468835 4644 generic.go:334] "Generic (PLEG): container finished" podID="675bfb30-a6b7-4900-aada-393599154da1" containerID="5a02b5a6c91e5d2c7a931d70fe94b6118cb29fcb86493b1411fd7f96520a428a" exitCode=0 Feb 04 08:57:54 crc kubenswrapper[4644]: I0204 08:57:54.469591 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-77jxs" event={"ID":"675bfb30-a6b7-4900-aada-393599154da1","Type":"ContainerDied","Data":"5a02b5a6c91e5d2c7a931d70fe94b6118cb29fcb86493b1411fd7f96520a428a"} Feb 04 08:57:54 crc kubenswrapper[4644]: I0204 08:57:54.670725 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="539a7b01-25e6-49ec-8d04-e743c92ed53f" path="/var/lib/kubelet/pods/539a7b01-25e6-49ec-8d04-e743c92ed53f/volumes" Feb 04 08:57:54 crc kubenswrapper[4644]: I0204 08:57:54.793310 4644 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 08:57:54 crc kubenswrapper[4644]: I0204 08:57:54.794367 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.110993 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.263741 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.455754 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-run\") pod \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.455856 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-run" (OuterVolumeSpecName: "var-run") pod "70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8" (UID: "70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.455929 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-run-ovn\") pod \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.456009 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-log-ovn\") pod \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.456101 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-additional-scripts\") pod \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.456054 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8" (UID: "70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.456144 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69dmh\" (UniqueName: \"kubernetes.io/projected/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-kube-api-access-69dmh\") pod \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.456275 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-scripts\") pod \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\" (UID: \"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8\") " Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.456090 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8" (UID: "70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.456849 4644 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-run\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.456872 4644 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.456883 4644 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.458211 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-scripts" (OuterVolumeSpecName: "scripts") pod "70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8" (UID: "70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.465100 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8" (UID: "70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.485607 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-kube-api-access-69dmh" (OuterVolumeSpecName: "kube-api-access-69dmh") pod "70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8" (UID: "70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8"). InnerVolumeSpecName "kube-api-access-69dmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.507440 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8nfv7-config-96v84" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.511426 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8nfv7-config-96v84" event={"ID":"70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8","Type":"ContainerDied","Data":"deba59e976c7a5a10feb95529ef5a3c2d2c556b6eaa1d7c7df4e155138caf44f"} Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.511518 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deba59e976c7a5a10feb95529ef5a3c2d2c556b6eaa1d7c7df4e155138caf44f" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.550097 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.568594 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.568876 4644 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.568993 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69dmh\" (UniqueName: \"kubernetes.io/projected/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8-kube-api-access-69dmh\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.569629 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pdrwn"] Feb 04 08:57:55 crc kubenswrapper[4644]: E0204 08:57:55.570283 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539a7b01-25e6-49ec-8d04-e743c92ed53f" containerName="dnsmasq-dns" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.570304 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="539a7b01-25e6-49ec-8d04-e743c92ed53f" containerName="dnsmasq-dns" Feb 04 08:57:55 crc kubenswrapper[4644]: E0204 08:57:55.570347 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539a7b01-25e6-49ec-8d04-e743c92ed53f" containerName="init" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.570356 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="539a7b01-25e6-49ec-8d04-e743c92ed53f" containerName="init" Feb 04 08:57:55 crc kubenswrapper[4644]: E0204 08:57:55.570380 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8" containerName="ovn-config" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.570390 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8" containerName="ovn-config" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.570675 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="539a7b01-25e6-49ec-8d04-e743c92ed53f" containerName="dnsmasq-dns" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.570703 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8" containerName="ovn-config" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.604158 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.635462 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pdrwn"] Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.672106 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-pdrwn\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.672536 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-pdrwn\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.672707 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7vr6\" (UniqueName: \"kubernetes.io/projected/42473ac1-38e6-4651-9f0b-13df0950127d-kube-api-access-v7vr6\") pod \"dnsmasq-dns-b8fbc5445-pdrwn\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.672818 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-pdrwn\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.673526 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-config\") pod \"dnsmasq-dns-b8fbc5445-pdrwn\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.698586 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.777956 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-pdrwn\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.778633 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-pdrwn\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.778664 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vr6\" (UniqueName: \"kubernetes.io/projected/42473ac1-38e6-4651-9f0b-13df0950127d-kube-api-access-v7vr6\") pod \"dnsmasq-dns-b8fbc5445-pdrwn\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.778697 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-pdrwn\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.778764 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-config\") pod \"dnsmasq-dns-b8fbc5445-pdrwn\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.779057 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-pdrwn\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.779669 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-pdrwn\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.780281 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-config\") pod \"dnsmasq-dns-b8fbc5445-pdrwn\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.780360 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-pdrwn\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.823123 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7vr6\" (UniqueName: \"kubernetes.io/projected/42473ac1-38e6-4651-9f0b-13df0950127d-kube-api-access-v7vr6\") pod \"dnsmasq-dns-b8fbc5445-pdrwn\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.941215 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:55 crc kubenswrapper[4644]: I0204 08:57:55.959253 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=< Feb 04 08:57:55 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 08:57:55 crc kubenswrapper[4644]: > Feb 04 08:57:56 crc kubenswrapper[4644]: I0204 08:57:56.024475 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-77jxs" Feb 04 08:57:56 crc kubenswrapper[4644]: I0204 08:57:56.185872 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4bxd\" (UniqueName: \"kubernetes.io/projected/675bfb30-a6b7-4900-aada-393599154da1-kube-api-access-d4bxd\") pod \"675bfb30-a6b7-4900-aada-393599154da1\" (UID: \"675bfb30-a6b7-4900-aada-393599154da1\") " Feb 04 08:57:56 crc kubenswrapper[4644]: I0204 08:57:56.186275 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/675bfb30-a6b7-4900-aada-393599154da1-operator-scripts\") pod \"675bfb30-a6b7-4900-aada-393599154da1\" (UID: \"675bfb30-a6b7-4900-aada-393599154da1\") " Feb 04 08:57:56 crc kubenswrapper[4644]: I0204 08:57:56.187035 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/675bfb30-a6b7-4900-aada-393599154da1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "675bfb30-a6b7-4900-aada-393599154da1" (UID: "675bfb30-a6b7-4900-aada-393599154da1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:57:56 crc kubenswrapper[4644]: I0204 08:57:56.191522 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/675bfb30-a6b7-4900-aada-393599154da1-kube-api-access-d4bxd" (OuterVolumeSpecName: "kube-api-access-d4bxd") pod "675bfb30-a6b7-4900-aada-393599154da1" (UID: "675bfb30-a6b7-4900-aada-393599154da1"). InnerVolumeSpecName "kube-api-access-d4bxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:57:56 crc kubenswrapper[4644]: I0204 08:57:56.288444 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4bxd\" (UniqueName: \"kubernetes.io/projected/675bfb30-a6b7-4900-aada-393599154da1-kube-api-access-d4bxd\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:56 crc kubenswrapper[4644]: I0204 08:57:56.288474 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/675bfb30-a6b7-4900-aada-393599154da1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:57:56 crc kubenswrapper[4644]: I0204 08:57:56.411040 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8nfv7-config-96v84"] Feb 04 08:57:56 crc kubenswrapper[4644]: I0204 08:57:56.426587 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8nfv7-config-96v84"] Feb 04 08:57:56 crc kubenswrapper[4644]: I0204 08:57:56.547987 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-77jxs" Feb 04 08:57:56 crc kubenswrapper[4644]: I0204 08:57:56.549480 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-77jxs" event={"ID":"675bfb30-a6b7-4900-aada-393599154da1","Type":"ContainerDied","Data":"37c47d17cee1f4ff3f2a3fbf66663eddaf37b1fa77b9e96400db695e07d51d0e"} Feb 04 08:57:56 crc kubenswrapper[4644]: I0204 08:57:56.549568 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37c47d17cee1f4ff3f2a3fbf66663eddaf37b1fa77b9e96400db695e07d51d0e" Feb 04 08:57:56 crc kubenswrapper[4644]: I0204 08:57:56.604563 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pdrwn"] Feb 04 08:57:56 crc kubenswrapper[4644]: I0204 08:57:56.674705 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8" path="/var/lib/kubelet/pods/70cc5f1c-3688-442f-bf3b-5d04ea1fd9e8/volumes" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.013863 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 04 08:57:57 crc kubenswrapper[4644]: E0204 08:57:57.014454 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675bfb30-a6b7-4900-aada-393599154da1" containerName="mariadb-account-create-update" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.014472 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="675bfb30-a6b7-4900-aada-393599154da1" containerName="mariadb-account-create-update" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.014639 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="675bfb30-a6b7-4900-aada-393599154da1" containerName="mariadb-account-create-update" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.019565 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.022091 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.022354 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.022555 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.027246 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-fxc6d" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.039517 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.100471 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1344aa43-93ef-4780-a56d-3eb89d55b1a2-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.100529 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1344aa43-93ef-4780-a56d-3eb89d55b1a2-cache\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.100566 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1344aa43-93ef-4780-a56d-3eb89d55b1a2-lock\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.100662 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.100686 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.100744 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g6bc\" (UniqueName: \"kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-kube-api-access-5g6bc\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.202131 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1344aa43-93ef-4780-a56d-3eb89d55b1a2-lock\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.202269 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.202296 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.202356 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g6bc\" (UniqueName: \"kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-kube-api-access-5g6bc\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.202412 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1344aa43-93ef-4780-a56d-3eb89d55b1a2-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.202440 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1344aa43-93ef-4780-a56d-3eb89d55b1a2-cache\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: E0204 08:57:57.202549 4644 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 04 08:57:57 crc kubenswrapper[4644]: E0204 08:57:57.202575 4644 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 04 08:57:57 crc kubenswrapper[4644]: E0204 08:57:57.202653 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift podName:1344aa43-93ef-4780-a56d-3eb89d55b1a2 nodeName:}" failed. No retries permitted until 2026-02-04 08:57:57.702613918 +0000 UTC m=+987.742671733 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift") pod "swift-storage-0" (UID: "1344aa43-93ef-4780-a56d-3eb89d55b1a2") : configmap "swift-ring-files" not found Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.202706 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1344aa43-93ef-4780-a56d-3eb89d55b1a2-lock\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.202900 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.202977 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1344aa43-93ef-4780-a56d-3eb89d55b1a2-cache\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.207253 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1344aa43-93ef-4780-a56d-3eb89d55b1a2-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.228199 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.235278 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g6bc\" (UniqueName: \"kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-kube-api-access-5g6bc\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.560752 4644 generic.go:334] "Generic (PLEG): container finished" podID="42473ac1-38e6-4651-9f0b-13df0950127d" containerID="44f8f8943055c7a7205644446d00e075a04a012288f773bc8ca383cef0d72a41" exitCode=0 Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.560809 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" event={"ID":"42473ac1-38e6-4651-9f0b-13df0950127d","Type":"ContainerDied","Data":"44f8f8943055c7a7205644446d00e075a04a012288f773bc8ca383cef0d72a41"} Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.560837 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" event={"ID":"42473ac1-38e6-4651-9f0b-13df0950127d","Type":"ContainerStarted","Data":"f679674152936802fe47f9757721898a5da7d97927c9d4e277ad7ba7c74a66c8"} Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.605462 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-sltjx"] Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.614267 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.654673 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.655951 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.657639 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.681868 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sltjx"] Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.711590 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a843b53-7ea4-48d9-9c8a-16be734d66c6-etc-swift\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.711659 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a843b53-7ea4-48d9-9c8a-16be734d66c6-scripts\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.711693 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.711736 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a843b53-7ea4-48d9-9c8a-16be734d66c6-ring-data-devices\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.711762 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-dispersionconf\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.711785 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-swiftconf\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.711822 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttn47\" (UniqueName: \"kubernetes.io/projected/5a843b53-7ea4-48d9-9c8a-16be734d66c6-kube-api-access-ttn47\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.711848 
4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-combined-ca-bundle\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: E0204 08:57:57.713347 4644 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 04 08:57:57 crc kubenswrapper[4644]: E0204 08:57:57.713366 4644 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 04 08:57:57 crc kubenswrapper[4644]: E0204 08:57:57.713409 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift podName:1344aa43-93ef-4780-a56d-3eb89d55b1a2 nodeName:}" failed. No retries permitted until 2026-02-04 08:57:58.713391542 +0000 UTC m=+988.753449297 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift") pod "swift-storage-0" (UID: "1344aa43-93ef-4780-a56d-3eb89d55b1a2") : configmap "swift-ring-files" not found Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.813048 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a843b53-7ea4-48d9-9c8a-16be734d66c6-etc-swift\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.813103 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a843b53-7ea4-48d9-9c8a-16be734d66c6-scripts\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.813147 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a843b53-7ea4-48d9-9c8a-16be734d66c6-ring-data-devices\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.813164 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-dispersionconf\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.813181 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-swiftconf\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.813211 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttn47\" (UniqueName: \"kubernetes.io/projected/5a843b53-7ea4-48d9-9c8a-16be734d66c6-kube-api-access-ttn47\") pod \"swift-ring-rebalance-sltjx\" 
(UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.813227 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-combined-ca-bundle\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.814875 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a843b53-7ea4-48d9-9c8a-16be734d66c6-etc-swift\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.814917 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a843b53-7ea4-48d9-9c8a-16be734d66c6-ring-data-devices\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.815452 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a843b53-7ea4-48d9-9c8a-16be734d66c6-scripts\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.817801 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-combined-ca-bundle\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.818075 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-dispersionconf\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.821910 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-swiftconf\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:57 crc kubenswrapper[4644]: I0204 08:57:57.835063 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttn47\" (UniqueName: \"kubernetes.io/projected/5a843b53-7ea4-48d9-9c8a-16be734d66c6-kube-api-access-ttn47\") pod \"swift-ring-rebalance-sltjx\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") " pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.008852 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sltjx" Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.293893 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sltjx"] Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.579840 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" event={"ID":"42473ac1-38e6-4651-9f0b-13df0950127d","Type":"ContainerStarted","Data":"6279bc2710aafb11c8399cc2671efe9db0b87443ab72be2381b53e8052a336ba"} Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.580182 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.581253 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sltjx" event={"ID":"5a843b53-7ea4-48d9-9c8a-16be734d66c6","Type":"ContainerStarted","Data":"a69103c60c78256c1bff4d0b57970ca85bd18a17f3c33a096c7393c4a8d11774"} Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.604170 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" podStartSLOduration=3.604146819 podStartE2EDuration="3.604146819s" podCreationTimestamp="2026-02-04 08:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:57:58.599192603 +0000 UTC m=+988.639250358" watchObservedRunningTime="2026-02-04 08:57:58.604146819 +0000 UTC m=+988.644204584" Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.726875 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:57:58 crc kubenswrapper[4644]: E0204 08:57:58.727109 4644 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 04 08:57:58 crc kubenswrapper[4644]: E0204 08:57:58.727135 4644 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 04 08:57:58 crc kubenswrapper[4644]: E0204 08:57:58.727188 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift podName:1344aa43-93ef-4780-a56d-3eb89d55b1a2 nodeName:}" failed. No retries permitted until 2026-02-04 08:58:00.727168897 +0000 UTC m=+990.767226662 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift") pod "swift-storage-0" (UID: "1344aa43-93ef-4780-a56d-3eb89d55b1a2") : configmap "swift-ring-files" not found Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.801287 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gspgp"] Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.802550 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gspgp" Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.807367 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gspgp"] Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.926132 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7336-account-create-update-xwl7d"] Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.927508 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7336-account-create-update-xwl7d" Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.929735 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.939469 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/336812a1-9d89-4471-a974-e04f21404612-operator-scripts\") pod \"glance-db-create-gspgp\" (UID: \"336812a1-9d89-4471-a974-e04f21404612\") " pod="openstack/glance-db-create-gspgp" Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.939594 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2vvc\" (UniqueName: \"kubernetes.io/projected/336812a1-9d89-4471-a974-e04f21404612-kube-api-access-w2vvc\") pod \"glance-db-create-gspgp\" (UID: \"336812a1-9d89-4471-a974-e04f21404612\") " pod="openstack/glance-db-create-gspgp" Feb 04 08:57:58 crc kubenswrapper[4644]: I0204 08:57:58.941723 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7336-account-create-update-xwl7d"] Feb 04 08:57:59 crc kubenswrapper[4644]: I0204 08:57:59.041514 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/336812a1-9d89-4471-a974-e04f21404612-operator-scripts\") pod \"glance-db-create-gspgp\" (UID: \"336812a1-9d89-4471-a974-e04f21404612\") " pod="openstack/glance-db-create-gspgp" Feb 04 08:57:59 crc kubenswrapper[4644]: I0204 08:57:59.041589 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbea064f-1305-4798-b660-7d3aa50cb6a2-operator-scripts\") pod \"glance-7336-account-create-update-xwl7d\" (UID: \"dbea064f-1305-4798-b660-7d3aa50cb6a2\") " pod="openstack/glance-7336-account-create-update-xwl7d" Feb 04 08:57:59 crc kubenswrapper[4644]: I0204 08:57:59.041619 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v68xv\" (UniqueName: \"kubernetes.io/projected/dbea064f-1305-4798-b660-7d3aa50cb6a2-kube-api-access-v68xv\") pod \"glance-7336-account-create-update-xwl7d\" (UID: \"dbea064f-1305-4798-b660-7d3aa50cb6a2\") " pod="openstack/glance-7336-account-create-update-xwl7d" Feb 04 08:57:59 crc kubenswrapper[4644]: I0204 08:57:59.041645 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2vvc\" (UniqueName: \"kubernetes.io/projected/336812a1-9d89-4471-a974-e04f21404612-kube-api-access-w2vvc\") pod \"glance-db-create-gspgp\" (UID: \"336812a1-9d89-4471-a974-e04f21404612\") " pod="openstack/glance-db-create-gspgp" Feb 04 08:57:59 crc kubenswrapper[4644]: I0204 08:57:59.042405 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/336812a1-9d89-4471-a974-e04f21404612-operator-scripts\") pod \"glance-db-create-gspgp\" (UID: \"336812a1-9d89-4471-a974-e04f21404612\") " pod="openstack/glance-db-create-gspgp" Feb 04 08:57:59 crc kubenswrapper[4644]: I0204 08:57:59.072903 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2vvc\" (UniqueName: \"kubernetes.io/projected/336812a1-9d89-4471-a974-e04f21404612-kube-api-access-w2vvc\") pod \"glance-db-create-gspgp\" (UID: \"336812a1-9d89-4471-a974-e04f21404612\") " pod="openstack/glance-db-create-gspgp" Feb 04 08:57:59 crc kubenswrapper[4644]: I0204 08:57:59.137193 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gspgp" Feb 04 08:57:59 crc kubenswrapper[4644]: I0204 08:57:59.143854 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v68xv\" (UniqueName: \"kubernetes.io/projected/dbea064f-1305-4798-b660-7d3aa50cb6a2-kube-api-access-v68xv\") pod \"glance-7336-account-create-update-xwl7d\" (UID: \"dbea064f-1305-4798-b660-7d3aa50cb6a2\") " pod="openstack/glance-7336-account-create-update-xwl7d" Feb 04 08:57:59 crc kubenswrapper[4644]: I0204 08:57:59.144172 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbea064f-1305-4798-b660-7d3aa50cb6a2-operator-scripts\") pod \"glance-7336-account-create-update-xwl7d\" (UID: \"dbea064f-1305-4798-b660-7d3aa50cb6a2\") " pod="openstack/glance-7336-account-create-update-xwl7d" Feb 04 08:57:59 crc kubenswrapper[4644]: I0204 08:57:59.170857 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v68xv\" (UniqueName: \"kubernetes.io/projected/dbea064f-1305-4798-b660-7d3aa50cb6a2-kube-api-access-v68xv\") pod \"glance-7336-account-create-update-xwl7d\" (UID: \"dbea064f-1305-4798-b660-7d3aa50cb6a2\") " pod="openstack/glance-7336-account-create-update-xwl7d" Feb 04 08:57:59 crc kubenswrapper[4644]: I0204 08:57:59.218659 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbea064f-1305-4798-b660-7d3aa50cb6a2-operator-scripts\") pod \"glance-7336-account-create-update-xwl7d\" (UID: \"dbea064f-1305-4798-b660-7d3aa50cb6a2\") " pod="openstack/glance-7336-account-create-update-xwl7d" Feb 04 08:57:59 crc kubenswrapper[4644]: I0204 08:57:59.270899 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7336-account-create-update-xwl7d" Feb 04 08:57:59 crc kubenswrapper[4644]: W0204 08:57:59.634884 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod336812a1_9d89_4471_a974_e04f21404612.slice/crio-c0e2c207d97412951375160c2021771be9bcbc3772feada286c063240a1d936a WatchSource:0}: Error finding container c0e2c207d97412951375160c2021771be9bcbc3772feada286c063240a1d936a: Status 404 returned error can't find the container with id c0e2c207d97412951375160c2021771be9bcbc3772feada286c063240a1d936a Feb 04 08:57:59 crc kubenswrapper[4644]: I0204 08:57:59.637957 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gspgp"] Feb 04 08:57:59 crc kubenswrapper[4644]: I0204 08:57:59.845734 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="cfd19433-aab2-4d07-99e5-edee81956813" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5671: connect: connection refused" Feb 04 08:57:59 crc kubenswrapper[4644]: I0204 08:57:59.877298 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7336-account-create-update-xwl7d"] Feb 04 08:57:59 crc kubenswrapper[4644]: W0204 08:57:59.884767 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbea064f_1305_4798_b660_7d3aa50cb6a2.slice/crio-ebe7b133a4a6b6dd8de0ac4c2f26f5030608aa5b23a5a1ddac90f11c7392c34c WatchSource:0}: Error finding container ebe7b133a4a6b6dd8de0ac4c2f26f5030608aa5b23a5a1ddac90f11c7392c34c: Status 404 returned error can't find the container with id ebe7b133a4a6b6dd8de0ac4c2f26f5030608aa5b23a5a1ddac90f11c7392c34c Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.016716 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-77jxs"] Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.020383 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-77jxs"] Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.110727 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-59svq"] Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.112362 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-59svq" Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.114585 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.131626 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-59svq"] Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.165536 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36db8322-5f0d-474e-baad-17e410c8c9f2-operator-scripts\") pod \"root-account-create-update-59svq\" (UID: \"36db8322-5f0d-474e-baad-17e410c8c9f2\") " pod="openstack/root-account-create-update-59svq" Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.165600 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpqrg\" (UniqueName: \"kubernetes.io/projected/36db8322-5f0d-474e-baad-17e410c8c9f2-kube-api-access-qpqrg\") pod \"root-account-create-update-59svq\" (UID: \"36db8322-5f0d-474e-baad-17e410c8c9f2\") " pod="openstack/root-account-create-update-59svq" Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.166048 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.266992 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpqrg\" (UniqueName: \"kubernetes.io/projected/36db8322-5f0d-474e-baad-17e410c8c9f2-kube-api-access-qpqrg\") pod \"root-account-create-update-59svq\" (UID: \"36db8322-5f0d-474e-baad-17e410c8c9f2\") " pod="openstack/root-account-create-update-59svq" Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.267214 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36db8322-5f0d-474e-baad-17e410c8c9f2-operator-scripts\") pod \"root-account-create-update-59svq\" (UID: \"36db8322-5f0d-474e-baad-17e410c8c9f2\") " pod="openstack/root-account-create-update-59svq" Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.269868 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36db8322-5f0d-474e-baad-17e410c8c9f2-operator-scripts\") pod \"root-account-create-update-59svq\" (UID: \"36db8322-5f0d-474e-baad-17e410c8c9f2\") " pod="openstack/root-account-create-update-59svq" Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.306008 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpqrg\" (UniqueName: \"kubernetes.io/projected/36db8322-5f0d-474e-baad-17e410c8c9f2-kube-api-access-qpqrg\") pod \"root-account-create-update-59svq\" (UID: \"36db8322-5f0d-474e-baad-17e410c8c9f2\") " pod="openstack/root-account-create-update-59svq" Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.435516 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-59svq" Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.621051 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7336-account-create-update-xwl7d" event={"ID":"dbea064f-1305-4798-b660-7d3aa50cb6a2","Type":"ContainerStarted","Data":"86ee48f2731172d403f473ebbb3ba38a69dc40396ca65823bfbf5730cbeb18f2"} Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.621097 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7336-account-create-update-xwl7d" event={"ID":"dbea064f-1305-4798-b660-7d3aa50cb6a2","Type":"ContainerStarted","Data":"ebe7b133a4a6b6dd8de0ac4c2f26f5030608aa5b23a5a1ddac90f11c7392c34c"} Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.636543 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gspgp" event={"ID":"336812a1-9d89-4471-a974-e04f21404612","Type":"ContainerStarted","Data":"14902e2f24d6274b974c91458f166caf00bc6dd0d1f7858fddafec27c6e4e6b4"} Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.636598 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gspgp" event={"ID":"336812a1-9d89-4471-a974-e04f21404612","Type":"ContainerStarted","Data":"c0e2c207d97412951375160c2021771be9bcbc3772feada286c063240a1d936a"} Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.648660 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7336-account-create-update-xwl7d" podStartSLOduration=2.648646041 podStartE2EDuration="2.648646041s" podCreationTimestamp="2026-02-04 08:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:58:00.646254337 +0000 UTC m=+990.686312092" watchObservedRunningTime="2026-02-04 08:58:00.648646041 +0000 UTC m=+990.688703796" Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.672043 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="675bfb30-a6b7-4900-aada-393599154da1" path="/var/lib/kubelet/pods/675bfb30-a6b7-4900-aada-393599154da1/volumes" Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.692933 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-gspgp" podStartSLOduration=2.692917853 podStartE2EDuration="2.692917853s" podCreationTimestamp="2026-02-04 08:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:58:00.682253201 +0000 UTC m=+990.722310956" watchObservedRunningTime="2026-02-04 08:58:00.692917853 +0000 UTC m=+990.732975608" Feb 04 08:58:00 crc kubenswrapper[4644]: I0204 08:58:00.804240 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:58:00 crc kubenswrapper[4644]: E0204 08:58:00.804432 4644 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 04 08:58:00 crc kubenswrapper[4644]: E0204 08:58:00.804445 4644 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 04 08:58:00 crc kubenswrapper[4644]: E0204 08:58:00.804487 4644 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift podName:1344aa43-93ef-4780-a56d-3eb89d55b1a2 nodeName:}" failed. No retries permitted until 2026-02-04 08:58:04.804473328 +0000 UTC m=+994.844531083 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift") pod "swift-storage-0" (UID: "1344aa43-93ef-4780-a56d-3eb89d55b1a2") : configmap "swift-ring-files" not found Feb 04 08:58:01 crc kubenswrapper[4644]: I0204 08:58:01.236971 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-59svq"] Feb 04 08:58:01 crc kubenswrapper[4644]: I0204 08:58:01.645314 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-59svq" event={"ID":"36db8322-5f0d-474e-baad-17e410c8c9f2","Type":"ContainerStarted","Data":"bde3d5f7d48bb1a2893c38582baba27a1b038e3b65e10c70a11c359ae78f8691"} Feb 04 08:58:01 crc kubenswrapper[4644]: I0204 08:58:01.650911 4644 generic.go:334] "Generic (PLEG): container finished" podID="336812a1-9d89-4471-a974-e04f21404612" containerID="14902e2f24d6274b974c91458f166caf00bc6dd0d1f7858fddafec27c6e4e6b4" exitCode=0 Feb 04 08:58:01 crc kubenswrapper[4644]: I0204 08:58:01.651984 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gspgp" event={"ID":"336812a1-9d89-4471-a974-e04f21404612","Type":"ContainerDied","Data":"14902e2f24d6274b974c91458f166caf00bc6dd0d1f7858fddafec27c6e4e6b4"} Feb 04 08:58:02 crc kubenswrapper[4644]: I0204 08:58:02.058601 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vrbhk" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerName="registry-server" probeResult="failure" output=< Feb 04 08:58:02 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 08:58:02 crc kubenswrapper[4644]: > Feb 04 08:58:02 crc kubenswrapper[4644]: I0204 08:58:02.658048 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-59svq" event={"ID":"36db8322-5f0d-474e-baad-17e410c8c9f2","Type":"ContainerStarted","Data":"84b1b24d4e3257362d9453790dc80b4e57314ce6e6114db8d3f0048790cc2165"} Feb 04 08:58:02 crc kubenswrapper[4644]: I0204 08:58:02.706297 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-59svq" podStartSLOduration=2.706279774 podStartE2EDuration="2.706279774s" podCreationTimestamp="2026-02-04 08:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:58:02.700116256 +0000 UTC m=+992.740174011" watchObservedRunningTime="2026-02-04 08:58:02.706279774 +0000 UTC m=+992.746337529" Feb 04 08:58:02 crc kubenswrapper[4644]: I0204 08:58:02.823377 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-2ncgr"] Feb 04 08:58:02 crc kubenswrapper[4644]: I0204 08:58:02.824468 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-2ncgr" Feb 04 08:58:02 crc kubenswrapper[4644]: I0204 08:58:02.833020 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2ncgr"] Feb 04 08:58:02 crc kubenswrapper[4644]: I0204 08:58:02.952808 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e179b679-97af-4318-bdf1-07aedb5117a6-operator-scripts\") pod \"keystone-db-create-2ncgr\" (UID: \"e179b679-97af-4318-bdf1-07aedb5117a6\") " pod="openstack/keystone-db-create-2ncgr" Feb 04 08:58:02 crc kubenswrapper[4644]: I0204 08:58:02.952855 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrvjg\" (UniqueName: \"kubernetes.io/projected/e179b679-97af-4318-bdf1-07aedb5117a6-kube-api-access-zrvjg\") pod \"keystone-db-create-2ncgr\" (UID: \"e179b679-97af-4318-bdf1-07aedb5117a6\") " pod="openstack/keystone-db-create-2ncgr" Feb 04 08:58:02 crc kubenswrapper[4644]: I0204 08:58:02.976598 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0b91-account-create-update-f46bp"] Feb 04 08:58:02 crc kubenswrapper[4644]: I0204 08:58:02.978143 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0b91-account-create-update-f46bp" Feb 04 08:58:02 crc kubenswrapper[4644]: I0204 08:58:02.980320 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 04 08:58:02 crc kubenswrapper[4644]: I0204 08:58:02.987320 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0b91-account-create-update-f46bp"] Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.054158 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26972986-ae99-445c-8cfa-ef894a5427c3-operator-scripts\") pod \"keystone-0b91-account-create-update-f46bp\" (UID: \"26972986-ae99-445c-8cfa-ef894a5427c3\") " pod="openstack/keystone-0b91-account-create-update-f46bp" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.054209 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrvjg\" (UniqueName: \"kubernetes.io/projected/e179b679-97af-4318-bdf1-07aedb5117a6-kube-api-access-zrvjg\") pod \"keystone-db-create-2ncgr\" (UID: \"e179b679-97af-4318-bdf1-07aedb5117a6\") " pod="openstack/keystone-db-create-2ncgr" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.054229 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e179b679-97af-4318-bdf1-07aedb5117a6-operator-scripts\") pod \"keystone-db-create-2ncgr\" (UID: \"e179b679-97af-4318-bdf1-07aedb5117a6\") " pod="openstack/keystone-db-create-2ncgr" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.054268 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hccqb\" (UniqueName: \"kubernetes.io/projected/26972986-ae99-445c-8cfa-ef894a5427c3-kube-api-access-hccqb\") pod \"keystone-0b91-account-create-update-f46bp\" (UID: \"26972986-ae99-445c-8cfa-ef894a5427c3\") " pod="openstack/keystone-0b91-account-create-update-f46bp" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.055299 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e179b679-97af-4318-bdf1-07aedb5117a6-operator-scripts\") pod \"keystone-db-create-2ncgr\" (UID: \"e179b679-97af-4318-bdf1-07aedb5117a6\") " pod="openstack/keystone-db-create-2ncgr" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.078307 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrvjg\" (UniqueName: \"kubernetes.io/projected/e179b679-97af-4318-bdf1-07aedb5117a6-kube-api-access-zrvjg\") pod \"keystone-db-create-2ncgr\" (UID: \"e179b679-97af-4318-bdf1-07aedb5117a6\") " pod="openstack/keystone-db-create-2ncgr" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.155610 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hccqb\" (UniqueName: \"kubernetes.io/projected/26972986-ae99-445c-8cfa-ef894a5427c3-kube-api-access-hccqb\") pod \"keystone-0b91-account-create-update-f46bp\" (UID: \"26972986-ae99-445c-8cfa-ef894a5427c3\") " pod="openstack/keystone-0b91-account-create-update-f46bp" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.155793 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26972986-ae99-445c-8cfa-ef894a5427c3-operator-scripts\") pod \"keystone-0b91-account-create-update-f46bp\" (UID: \"26972986-ae99-445c-8cfa-ef894a5427c3\") " pod="openstack/keystone-0b91-account-create-update-f46bp" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.156786 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26972986-ae99-445c-8cfa-ef894a5427c3-operator-scripts\") pod \"keystone-0b91-account-create-update-f46bp\" (UID: \"26972986-ae99-445c-8cfa-ef894a5427c3\") " pod="openstack/keystone-0b91-account-create-update-f46bp" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.173585 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-8sw9b"] Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.174657 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8sw9b" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.182996 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-2ncgr" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.190366 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hccqb\" (UniqueName: \"kubernetes.io/projected/26972986-ae99-445c-8cfa-ef894a5427c3-kube-api-access-hccqb\") pod \"keystone-0b91-account-create-update-f46bp\" (UID: \"26972986-ae99-445c-8cfa-ef894a5427c3\") " pod="openstack/keystone-0b91-account-create-update-f46bp" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.193914 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8sw9b"] Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.257526 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/601e8461-e38b-4282-bdf4-2a8465e6623d-operator-scripts\") pod \"placement-db-create-8sw9b\" (UID: \"601e8461-e38b-4282-bdf4-2a8465e6623d\") " pod="openstack/placement-db-create-8sw9b" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.257595 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz8jz\" (UniqueName: \"kubernetes.io/projected/601e8461-e38b-4282-bdf4-2a8465e6623d-kube-api-access-bz8jz\") pod \"placement-db-create-8sw9b\" (UID: \"601e8461-e38b-4282-bdf4-2a8465e6623d\") " pod="openstack/placement-db-create-8sw9b" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.321431 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0b91-account-create-update-f46bp" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.326500 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4b99-account-create-update-8vkh4"] Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.327795 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4b99-account-create-update-8vkh4" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.330586 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.347746 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4b99-account-create-update-8vkh4"] Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.358686 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz8jz\" (UniqueName: \"kubernetes.io/projected/601e8461-e38b-4282-bdf4-2a8465e6623d-kube-api-access-bz8jz\") pod \"placement-db-create-8sw9b\" (UID: \"601e8461-e38b-4282-bdf4-2a8465e6623d\") " pod="openstack/placement-db-create-8sw9b" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.358833 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/601e8461-e38b-4282-bdf4-2a8465e6623d-operator-scripts\") pod \"placement-db-create-8sw9b\" (UID: \"601e8461-e38b-4282-bdf4-2a8465e6623d\") " pod="openstack/placement-db-create-8sw9b" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.359433 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/601e8461-e38b-4282-bdf4-2a8465e6623d-operator-scripts\") pod \"placement-db-create-8sw9b\" (UID: \"601e8461-e38b-4282-bdf4-2a8465e6623d\") " pod="openstack/placement-db-create-8sw9b" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.388860 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz8jz\" (UniqueName: \"kubernetes.io/projected/601e8461-e38b-4282-bdf4-2a8465e6623d-kube-api-access-bz8jz\") pod \"placement-db-create-8sw9b\" (UID: \"601e8461-e38b-4282-bdf4-2a8465e6623d\") " pod="openstack/placement-db-create-8sw9b" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.463072 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93002c20-2499-4505-821e-a86b63dc5d97-operator-scripts\") pod \"placement-4b99-account-create-update-8vkh4\" (UID: \"93002c20-2499-4505-821e-a86b63dc5d97\") " pod="openstack/placement-4b99-account-create-update-8vkh4" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.467892 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knnj6\" (UniqueName: \"kubernetes.io/projected/93002c20-2499-4505-821e-a86b63dc5d97-kube-api-access-knnj6\") pod \"placement-4b99-account-create-update-8vkh4\" (UID: \"93002c20-2499-4505-821e-a86b63dc5d97\") " pod="openstack/placement-4b99-account-create-update-8vkh4" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.533124 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-8sw9b" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.570978 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knnj6\" (UniqueName: \"kubernetes.io/projected/93002c20-2499-4505-821e-a86b63dc5d97-kube-api-access-knnj6\") pod \"placement-4b99-account-create-update-8vkh4\" (UID: \"93002c20-2499-4505-821e-a86b63dc5d97\") " pod="openstack/placement-4b99-account-create-update-8vkh4" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.571072 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93002c20-2499-4505-821e-a86b63dc5d97-operator-scripts\") pod \"placement-4b99-account-create-update-8vkh4\" (UID: \"93002c20-2499-4505-821e-a86b63dc5d97\") " pod="openstack/placement-4b99-account-create-update-8vkh4" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.573430 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93002c20-2499-4505-821e-a86b63dc5d97-operator-scripts\") pod \"placement-4b99-account-create-update-8vkh4\" (UID: \"93002c20-2499-4505-821e-a86b63dc5d97\") " pod="openstack/placement-4b99-account-create-update-8vkh4" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.590891 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knnj6\" (UniqueName: \"kubernetes.io/projected/93002c20-2499-4505-821e-a86b63dc5d97-kube-api-access-knnj6\") pod \"placement-4b99-account-create-update-8vkh4\" (UID: \"93002c20-2499-4505-821e-a86b63dc5d97\") " pod="openstack/placement-4b99-account-create-update-8vkh4" Feb 04 08:58:03 crc kubenswrapper[4644]: I0204 08:58:03.731417 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4b99-account-create-update-8vkh4" Feb 04 08:58:04 crc kubenswrapper[4644]: I0204 08:58:04.891919 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:58:04 crc kubenswrapper[4644]: E0204 08:58:04.892094 4644 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 04 08:58:04 crc kubenswrapper[4644]: E0204 08:58:04.892268 4644 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 04 08:58:04 crc kubenswrapper[4644]: E0204 08:58:04.892316 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift podName:1344aa43-93ef-4780-a56d-3eb89d55b1a2 nodeName:}" failed. No retries permitted until 2026-02-04 08:58:12.892300782 +0000 UTC m=+1002.932358537 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift") pod "swift-storage-0" (UID: "1344aa43-93ef-4780-a56d-3eb89d55b1a2") : configmap "swift-ring-files" not found Feb 04 08:58:04 crc kubenswrapper[4644]: I0204 08:58:04.986856 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gspgp" Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.096438 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/336812a1-9d89-4471-a974-e04f21404612-operator-scripts\") pod \"336812a1-9d89-4471-a974-e04f21404612\" (UID: \"336812a1-9d89-4471-a974-e04f21404612\") " Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.096785 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2vvc\" (UniqueName: \"kubernetes.io/projected/336812a1-9d89-4471-a974-e04f21404612-kube-api-access-w2vvc\") pod \"336812a1-9d89-4471-a974-e04f21404612\" (UID: \"336812a1-9d89-4471-a974-e04f21404612\") " Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.097051 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/336812a1-9d89-4471-a974-e04f21404612-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "336812a1-9d89-4471-a974-e04f21404612" (UID: "336812a1-9d89-4471-a974-e04f21404612"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.097309 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/336812a1-9d89-4471-a974-e04f21404612-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.102545 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/336812a1-9d89-4471-a974-e04f21404612-kube-api-access-w2vvc" (OuterVolumeSpecName: "kube-api-access-w2vvc") pod "336812a1-9d89-4471-a974-e04f21404612" (UID: "336812a1-9d89-4471-a974-e04f21404612"). InnerVolumeSpecName "kube-api-access-w2vvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.198673 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2vvc\" (UniqueName: \"kubernetes.io/projected/336812a1-9d89-4471-a974-e04f21404612-kube-api-access-w2vvc\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.555143 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.555191 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.555235 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.555733 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d37b7ec44c6b923e084d94c0277cc27b0523c1422f5853a55c2775dc5aaf2703"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.555784 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://d37b7ec44c6b923e084d94c0277cc27b0523c1422f5853a55c2775dc5aaf2703" gracePeriod=600 Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.684815 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gspgp" event={"ID":"336812a1-9d89-4471-a974-e04f21404612","Type":"ContainerDied","Data":"c0e2c207d97412951375160c2021771be9bcbc3772feada286c063240a1d936a"} Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.684871 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0e2c207d97412951375160c2021771be9bcbc3772feada286c063240a1d936a" Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.685144 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gspgp" Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.703987 4644 generic.go:334] "Generic (PLEG): container finished" podID="dbea064f-1305-4798-b660-7d3aa50cb6a2" containerID="86ee48f2731172d403f473ebbb3ba38a69dc40396ca65823bfbf5730cbeb18f2" exitCode=0 Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.704182 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7336-account-create-update-xwl7d" event={"ID":"dbea064f-1305-4798-b660-7d3aa50cb6a2","Type":"ContainerDied","Data":"86ee48f2731172d403f473ebbb3ba38a69dc40396ca65823bfbf5730cbeb18f2"} Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.837498 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=< Feb 04 08:58:05 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 08:58:05 crc kubenswrapper[4644]: > Feb 04 08:58:05 crc kubenswrapper[4644]: I0204 08:58:05.943039 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:58:06 crc kubenswrapper[4644]: I0204 08:58:06.059116 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-95k6h"] Feb 04 08:58:06 crc kubenswrapper[4644]: I0204 08:58:06.059827 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-95k6h" podUID="b953e99a-7dbd-4300-95d5-844c241e3207" containerName="dnsmasq-dns" containerID="cri-o://5a3215b2fed3b8118828da8d3431f626f6777f4283572dba9de14b487258df8c" gracePeriod=10 Feb 04 08:58:06 crc kubenswrapper[4644]: I0204 08:58:06.718083 4644 generic.go:334] "Generic (PLEG): container finished" podID="b953e99a-7dbd-4300-95d5-844c241e3207" containerID="5a3215b2fed3b8118828da8d3431f626f6777f4283572dba9de14b487258df8c" exitCode=0 Feb 04 08:58:06 crc kubenswrapper[4644]: I0204 08:58:06.718171 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-95k6h" event={"ID":"b953e99a-7dbd-4300-95d5-844c241e3207","Type":"ContainerDied","Data":"5a3215b2fed3b8118828da8d3431f626f6777f4283572dba9de14b487258df8c"} Feb 04 08:58:06 crc kubenswrapper[4644]: I0204 08:58:06.720254 4644 generic.go:334] "Generic (PLEG): container finished" podID="36db8322-5f0d-474e-baad-17e410c8c9f2" containerID="84b1b24d4e3257362d9453790dc80b4e57314ce6e6114db8d3f0048790cc2165" exitCode=0 Feb 04 08:58:06 crc kubenswrapper[4644]: I0204 08:58:06.720395 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-59svq" event={"ID":"36db8322-5f0d-474e-baad-17e410c8c9f2","Type":"ContainerDied","Data":"84b1b24d4e3257362d9453790dc80b4e57314ce6e6114db8d3f0048790cc2165"} Feb 04 08:58:06 crc kubenswrapper[4644]: I0204 08:58:06.725141 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="d37b7ec44c6b923e084d94c0277cc27b0523c1422f5853a55c2775dc5aaf2703" exitCode=0 Feb 04 08:58:06 crc kubenswrapper[4644]: I0204 08:58:06.725345 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"d37b7ec44c6b923e084d94c0277cc27b0523c1422f5853a55c2775dc5aaf2703"} Feb 04 08:58:06 crc kubenswrapper[4644]: I0204 
08:58:06.725378 4644 scope.go:117] "RemoveContainer" containerID="c81dc8963c853292a044170f0ee77ae242e3b6dd8a83fa571fd5d2427fd33119" Feb 04 08:58:07 crc kubenswrapper[4644]: I0204 08:58:07.154660 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.723860 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7336-account-create-update-xwl7d" Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.731043 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-59svq" Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.783731 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7336-account-create-update-xwl7d" event={"ID":"dbea064f-1305-4798-b660-7d3aa50cb6a2","Type":"ContainerDied","Data":"ebe7b133a4a6b6dd8de0ac4c2f26f5030608aa5b23a5a1ddac90f11c7392c34c"} Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.783763 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebe7b133a4a6b6dd8de0ac4c2f26f5030608aa5b23a5a1ddac90f11c7392c34c" Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.783837 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7336-account-create-update-xwl7d" Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.804696 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-59svq" event={"ID":"36db8322-5f0d-474e-baad-17e410c8c9f2","Type":"ContainerDied","Data":"bde3d5f7d48bb1a2893c38582baba27a1b038e3b65e10c70a11c359ae78f8691"} Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.804740 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bde3d5f7d48bb1a2893c38582baba27a1b038e3b65e10c70a11c359ae78f8691" Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.804907 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-59svq" Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.885100 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v68xv\" (UniqueName: \"kubernetes.io/projected/dbea064f-1305-4798-b660-7d3aa50cb6a2-kube-api-access-v68xv\") pod \"dbea064f-1305-4798-b660-7d3aa50cb6a2\" (UID: \"dbea064f-1305-4798-b660-7d3aa50cb6a2\") " Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.885284 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36db8322-5f0d-474e-baad-17e410c8c9f2-operator-scripts\") pod \"36db8322-5f0d-474e-baad-17e410c8c9f2\" (UID: \"36db8322-5f0d-474e-baad-17e410c8c9f2\") " Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.885403 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbea064f-1305-4798-b660-7d3aa50cb6a2-operator-scripts\") pod \"dbea064f-1305-4798-b660-7d3aa50cb6a2\" (UID: \"dbea064f-1305-4798-b660-7d3aa50cb6a2\") " Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.885444 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpqrg\" (UniqueName: \"kubernetes.io/projected/36db8322-5f0d-474e-baad-17e410c8c9f2-kube-api-access-qpqrg\") pod \"36db8322-5f0d-474e-baad-17e410c8c9f2\" (UID: \"36db8322-5f0d-474e-baad-17e410c8c9f2\") " Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.887143 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36db8322-5f0d-474e-baad-17e410c8c9f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36db8322-5f0d-474e-baad-17e410c8c9f2" (UID: "36db8322-5f0d-474e-baad-17e410c8c9f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.893647 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36db8322-5f0d-474e-baad-17e410c8c9f2-kube-api-access-qpqrg" (OuterVolumeSpecName: "kube-api-access-qpqrg") pod "36db8322-5f0d-474e-baad-17e410c8c9f2" (UID: "36db8322-5f0d-474e-baad-17e410c8c9f2"). InnerVolumeSpecName "kube-api-access-qpqrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.908127 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbea064f-1305-4798-b660-7d3aa50cb6a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dbea064f-1305-4798-b660-7d3aa50cb6a2" (UID: "dbea064f-1305-4798-b660-7d3aa50cb6a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.913138 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbea064f-1305-4798-b660-7d3aa50cb6a2-kube-api-access-v68xv" (OuterVolumeSpecName: "kube-api-access-v68xv") pod "dbea064f-1305-4798-b660-7d3aa50cb6a2" (UID: "dbea064f-1305-4798-b660-7d3aa50cb6a2"). InnerVolumeSpecName "kube-api-access-v68xv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.988027 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbea064f-1305-4798-b660-7d3aa50cb6a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.988061 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpqrg\" (UniqueName: \"kubernetes.io/projected/36db8322-5f0d-474e-baad-17e410c8c9f2-kube-api-access-qpqrg\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.988075 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v68xv\" (UniqueName: \"kubernetes.io/projected/dbea064f-1305-4798-b660-7d3aa50cb6a2-kube-api-access-v68xv\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:08 crc kubenswrapper[4644]: I0204 08:58:08.988088 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36db8322-5f0d-474e-baad-17e410c8c9f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.280949 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.381395 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2ncgr"] Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.395890 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-config\") pod \"b953e99a-7dbd-4300-95d5-844c241e3207\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.395968 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-ovsdbserver-nb\") pod \"b953e99a-7dbd-4300-95d5-844c241e3207\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.396085 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-ovsdbserver-sb\") pod \"b953e99a-7dbd-4300-95d5-844c241e3207\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.396161 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrbnc\" (UniqueName: \"kubernetes.io/projected/b953e99a-7dbd-4300-95d5-844c241e3207-kube-api-access-qrbnc\") pod \"b953e99a-7dbd-4300-95d5-844c241e3207\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.396222 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-dns-svc\") pod \"b953e99a-7dbd-4300-95d5-844c241e3207\" (UID: \"b953e99a-7dbd-4300-95d5-844c241e3207\") " Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.422765 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b953e99a-7dbd-4300-95d5-844c241e3207-kube-api-access-qrbnc" (OuterVolumeSpecName: "kube-api-access-qrbnc") pod 
"b953e99a-7dbd-4300-95d5-844c241e3207" (UID: "b953e99a-7dbd-4300-95d5-844c241e3207"). InnerVolumeSpecName "kube-api-access-qrbnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.458286 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0b91-account-create-update-f46bp"] Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.481568 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4b99-account-create-update-8vkh4"] Feb 04 08:58:09 crc kubenswrapper[4644]: W0204 08:58:09.495786 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26972986_ae99_445c_8cfa_ef894a5427c3.slice/crio-87a12fd812f2a2015325bc2050a0c490858b3d85176c068bd992f30775ce4d43 WatchSource:0}: Error finding container 87a12fd812f2a2015325bc2050a0c490858b3d85176c068bd992f30775ce4d43: Status 404 returned error can't find the container with id 87a12fd812f2a2015325bc2050a0c490858b3d85176c068bd992f30775ce4d43 Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.503103 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrbnc\" (UniqueName: \"kubernetes.io/projected/b953e99a-7dbd-4300-95d5-844c241e3207-kube-api-access-qrbnc\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.503446 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-config" (OuterVolumeSpecName: "config") pod "b953e99a-7dbd-4300-95d5-844c241e3207" (UID: "b953e99a-7dbd-4300-95d5-844c241e3207"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:09 crc kubenswrapper[4644]: W0204 08:58:09.514483 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93002c20_2499_4505_821e_a86b63dc5d97.slice/crio-81d2b9555fc50204c0a0202485eb6b38c747988031787ede7b73d10b3919bf07 WatchSource:0}: Error finding container 81d2b9555fc50204c0a0202485eb6b38c747988031787ede7b73d10b3919bf07: Status 404 returned error can't find the container with id 81d2b9555fc50204c0a0202485eb6b38c747988031787ede7b73d10b3919bf07 Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.516917 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b953e99a-7dbd-4300-95d5-844c241e3207" (UID: "b953e99a-7dbd-4300-95d5-844c241e3207"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.543809 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b953e99a-7dbd-4300-95d5-844c241e3207" (UID: "b953e99a-7dbd-4300-95d5-844c241e3207"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.551637 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b953e99a-7dbd-4300-95d5-844c241e3207" (UID: "b953e99a-7dbd-4300-95d5-844c241e3207"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.604665 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.604705 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.604719 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.604730 4644 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b953e99a-7dbd-4300-95d5-844c241e3207-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.677000 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8sw9b"] Feb 04 08:58:09 crc kubenswrapper[4644]: W0204 08:58:09.677454 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod601e8461_e38b_4282_bdf4_2a8465e6623d.slice/crio-6661f743ab7bba915d388bd5a3b87f767e662e825f63710e81855fe4cdf8e57f WatchSource:0}: Error finding container 6661f743ab7bba915d388bd5a3b87f767e662e825f63710e81855fe4cdf8e57f: Status 404 returned error can't find the container with id 6661f743ab7bba915d388bd5a3b87f767e662e825f63710e81855fe4cdf8e57f Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.812741 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b99-account-create-update-8vkh4" event={"ID":"93002c20-2499-4505-821e-a86b63dc5d97","Type":"ContainerStarted","Data":"81d2b9555fc50204c0a0202485eb6b38c747988031787ede7b73d10b3919bf07"} Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.814479 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"36f72411266c61400b63aa036f1c2b9650e9b73d1bad4f669e237a3c8534406d"} Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.815938 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0b91-account-create-update-f46bp" event={"ID":"26972986-ae99-445c-8cfa-ef894a5427c3","Type":"ContainerStarted","Data":"87a12fd812f2a2015325bc2050a0c490858b3d85176c068bd992f30775ce4d43"} Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.817269 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2ncgr" event={"ID":"e179b679-97af-4318-bdf1-07aedb5117a6","Type":"ContainerStarted","Data":"3f77adbf0d00a6c60cc9b6cab1aaf61ce678ad40d3cc7800d7504e457a86b8d4"} Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.818552 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8sw9b" event={"ID":"601e8461-e38b-4282-bdf4-2a8465e6623d","Type":"ContainerStarted","Data":"6661f743ab7bba915d388bd5a3b87f767e662e825f63710e81855fe4cdf8e57f"} Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.820173 4644 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-95k6h" event={"ID":"b953e99a-7dbd-4300-95d5-844c241e3207","Type":"ContainerDied","Data":"af9cad6c489266faf4335a3144da505ed67a4130312e255a9eb85495af4d30b0"} Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.820301 4644 scope.go:117] "RemoveContainer" containerID="5a3215b2fed3b8118828da8d3431f626f6777f4283572dba9de14b487258df8c" Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.820236 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-95k6h" Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.847634 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.897320 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-95k6h"] Feb 04 08:58:09 crc kubenswrapper[4644]: I0204 08:58:09.906280 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-95k6h"] Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.471174 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-k9sw7"] Feb 04 08:58:10 crc kubenswrapper[4644]: E0204 08:58:10.471559 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbea064f-1305-4798-b660-7d3aa50cb6a2" containerName="mariadb-account-create-update" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.471573 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbea064f-1305-4798-b660-7d3aa50cb6a2" containerName="mariadb-account-create-update" Feb 04 08:58:10 crc kubenswrapper[4644]: E0204 08:58:10.471581 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336812a1-9d89-4471-a974-e04f21404612" containerName="mariadb-database-create" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.471587 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="336812a1-9d89-4471-a974-e04f21404612" containerName="mariadb-database-create" Feb 04 08:58:10 crc kubenswrapper[4644]: E0204 08:58:10.471611 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36db8322-5f0d-474e-baad-17e410c8c9f2" containerName="mariadb-account-create-update" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.471618 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="36db8322-5f0d-474e-baad-17e410c8c9f2" containerName="mariadb-account-create-update" Feb 04 08:58:10 crc kubenswrapper[4644]: E0204 08:58:10.471630 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b953e99a-7dbd-4300-95d5-844c241e3207" containerName="init" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.471637 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b953e99a-7dbd-4300-95d5-844c241e3207" containerName="init" Feb 04 08:58:10 crc kubenswrapper[4644]: E0204 08:58:10.471649 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b953e99a-7dbd-4300-95d5-844c241e3207" containerName="dnsmasq-dns" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.471655 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b953e99a-7dbd-4300-95d5-844c241e3207" containerName="dnsmasq-dns" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.471812 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="336812a1-9d89-4471-a974-e04f21404612" containerName="mariadb-database-create" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.471826 4644 
memory_manager.go:354] "RemoveStaleState removing state" podUID="dbea064f-1305-4798-b660-7d3aa50cb6a2" containerName="mariadb-account-create-update" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.471835 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="36db8322-5f0d-474e-baad-17e410c8c9f2" containerName="mariadb-account-create-update" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.471844 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="b953e99a-7dbd-4300-95d5-844c241e3207" containerName="dnsmasq-dns" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.472367 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k9sw7" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.492725 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-k9sw7"] Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.523512 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9183f642-1a40-4b25-93d6-0835b34764c1-operator-scripts\") pod \"barbican-db-create-k9sw7\" (UID: \"9183f642-1a40-4b25-93d6-0835b34764c1\") " pod="openstack/barbican-db-create-k9sw7" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.523750 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lr2x\" (UniqueName: \"kubernetes.io/projected/9183f642-1a40-4b25-93d6-0835b34764c1-kube-api-access-8lr2x\") pod \"barbican-db-create-k9sw7\" (UID: \"9183f642-1a40-4b25-93d6-0835b34764c1\") " pod="openstack/barbican-db-create-k9sw7" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.612700 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-b6d4b"] Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.613870 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-b6d4b" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.624970 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9183f642-1a40-4b25-93d6-0835b34764c1-operator-scripts\") pod \"barbican-db-create-k9sw7\" (UID: \"9183f642-1a40-4b25-93d6-0835b34764c1\") " pod="openstack/barbican-db-create-k9sw7" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.625197 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18e102f9-a169-43ec-bc1d-de48e8b59376-operator-scripts\") pod \"cinder-db-create-b6d4b\" (UID: \"18e102f9-a169-43ec-bc1d-de48e8b59376\") " pod="openstack/cinder-db-create-b6d4b" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.625250 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qgq7\" (UniqueName: \"kubernetes.io/projected/18e102f9-a169-43ec-bc1d-de48e8b59376-kube-api-access-2qgq7\") pod \"cinder-db-create-b6d4b\" (UID: \"18e102f9-a169-43ec-bc1d-de48e8b59376\") " pod="openstack/cinder-db-create-b6d4b" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.625299 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lr2x\" (UniqueName: \"kubernetes.io/projected/9183f642-1a40-4b25-93d6-0835b34764c1-kube-api-access-8lr2x\") pod \"barbican-db-create-k9sw7\" (UID: \"9183f642-1a40-4b25-93d6-0835b34764c1\") " pod="openstack/barbican-db-create-k9sw7" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.626585 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9183f642-1a40-4b25-93d6-0835b34764c1-operator-scripts\") pod \"barbican-db-create-k9sw7\" (UID: \"9183f642-1a40-4b25-93d6-0835b34764c1\") " pod="openstack/barbican-db-create-k9sw7" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.633879 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-b6d4b"] Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.669714 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lr2x\" (UniqueName: \"kubernetes.io/projected/9183f642-1a40-4b25-93d6-0835b34764c1-kube-api-access-8lr2x\") pod \"barbican-db-create-k9sw7\" (UID: \"9183f642-1a40-4b25-93d6-0835b34764c1\") " pod="openstack/barbican-db-create-k9sw7" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.675419 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b953e99a-7dbd-4300-95d5-844c241e3207" path="/var/lib/kubelet/pods/b953e99a-7dbd-4300-95d5-844c241e3207/volumes" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.727453 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18e102f9-a169-43ec-bc1d-de48e8b59376-operator-scripts\") pod \"cinder-db-create-b6d4b\" (UID: \"18e102f9-a169-43ec-bc1d-de48e8b59376\") " pod="openstack/cinder-db-create-b6d4b" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.727518 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qgq7\" (UniqueName: \"kubernetes.io/projected/18e102f9-a169-43ec-bc1d-de48e8b59376-kube-api-access-2qgq7\") pod \"cinder-db-create-b6d4b\" (UID: 
\"18e102f9-a169-43ec-bc1d-de48e8b59376\") " pod="openstack/cinder-db-create-b6d4b" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.729051 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18e102f9-a169-43ec-bc1d-de48e8b59376-operator-scripts\") pod \"cinder-db-create-b6d4b\" (UID: \"18e102f9-a169-43ec-bc1d-de48e8b59376\") " pod="openstack/cinder-db-create-b6d4b" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.803281 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qgq7\" (UniqueName: \"kubernetes.io/projected/18e102f9-a169-43ec-bc1d-de48e8b59376-kube-api-access-2qgq7\") pod \"cinder-db-create-b6d4b\" (UID: \"18e102f9-a169-43ec-bc1d-de48e8b59376\") " pod="openstack/cinder-db-create-b6d4b" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.806239 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k9sw7" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.829957 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0b91-account-create-update-f46bp" event={"ID":"26972986-ae99-445c-8cfa-ef894a5427c3","Type":"ContainerStarted","Data":"ff090da455a8744d4e5521ddad8de8fdafca5cf03e6f1011ae84346550cab925"} Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.831453 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2ncgr" event={"ID":"e179b679-97af-4318-bdf1-07aedb5117a6","Type":"ContainerStarted","Data":"9659a9c8278a66f0e2139848dd1fad92993f0010801b721a43986571fa12ce22"} Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.832890 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8sw9b" event={"ID":"601e8461-e38b-4282-bdf4-2a8465e6623d","Type":"ContainerStarted","Data":"5d773e64a637cc3d11cda5d41d2669f4fd9bd396859ee5d9f123bf27bb062e40"} Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.834900 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b99-account-create-update-8vkh4" event={"ID":"93002c20-2499-4505-821e-a86b63dc5d97","Type":"ContainerStarted","Data":"6f35bbf13c80b1888ba9266546f5e4a1181b8c282f7162a9edcf970e9384efa8"} Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.866667 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9610-account-create-update-n5g8s"] Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.867664 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9610-account-create-update-n5g8s" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.871957 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.922618 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9610-account-create-update-n5g8s"] Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.931443 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-765wd\" (UniqueName: \"kubernetes.io/projected/675884a4-966e-4ca9-b279-e37202cab1d7-kube-api-access-765wd\") pod \"cinder-9610-account-create-update-n5g8s\" (UID: \"675884a4-966e-4ca9-b279-e37202cab1d7\") " pod="openstack/cinder-9610-account-create-update-n5g8s" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.931503 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/675884a4-966e-4ca9-b279-e37202cab1d7-operator-scripts\") pod \"cinder-9610-account-create-update-n5g8s\" (UID: \"675884a4-966e-4ca9-b279-e37202cab1d7\") " pod="openstack/cinder-9610-account-create-update-n5g8s" Feb 04 08:58:10 crc kubenswrapper[4644]: I0204 08:58:10.954846 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-b6d4b" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.033489 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-867tt"] Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.035406 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-867tt" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.037274 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-765wd\" (UniqueName: \"kubernetes.io/projected/675884a4-966e-4ca9-b279-e37202cab1d7-kube-api-access-765wd\") pod \"cinder-9610-account-create-update-n5g8s\" (UID: \"675884a4-966e-4ca9-b279-e37202cab1d7\") " pod="openstack/cinder-9610-account-create-update-n5g8s" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.042223 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/675884a4-966e-4ca9-b279-e37202cab1d7-operator-scripts\") pod \"cinder-9610-account-create-update-n5g8s\" (UID: \"675884a4-966e-4ca9-b279-e37202cab1d7\") " pod="openstack/cinder-9610-account-create-update-n5g8s" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.044399 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/675884a4-966e-4ca9-b279-e37202cab1d7-operator-scripts\") pod \"cinder-9610-account-create-update-n5g8s\" (UID: \"675884a4-966e-4ca9-b279-e37202cab1d7\") " pod="openstack/cinder-9610-account-create-update-n5g8s" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.051229 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-2ncgr" podStartSLOduration=9.051209137 podStartE2EDuration="9.051209137s" podCreationTimestamp="2026-02-04 08:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:58:10.953910333 +0000 UTC m=+1000.993968088" 
watchObservedRunningTime="2026-02-04 08:58:11.051209137 +0000 UTC m=+1001.091266892" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.098556 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-867tt"] Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.109014 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-765wd\" (UniqueName: \"kubernetes.io/projected/675884a4-966e-4ca9-b279-e37202cab1d7-kube-api-access-765wd\") pod \"cinder-9610-account-create-update-n5g8s\" (UID: \"675884a4-966e-4ca9-b279-e37202cab1d7\") " pod="openstack/cinder-9610-account-create-update-n5g8s" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.148067 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-8sw9b" podStartSLOduration=8.148042328 podStartE2EDuration="8.148042328s" podCreationTimestamp="2026-02-04 08:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:58:11.056823851 +0000 UTC m=+1001.096881626" watchObservedRunningTime="2026-02-04 08:58:11.148042328 +0000 UTC m=+1001.188100073" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.150393 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g76t\" (UniqueName: \"kubernetes.io/projected/a17cf99e-6023-4563-9513-f5418f4a252b-kube-api-access-4g76t\") pod \"neutron-db-create-867tt\" (UID: \"a17cf99e-6023-4563-9513-f5418f4a252b\") " pod="openstack/neutron-db-create-867tt" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.150567 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a17cf99e-6023-4563-9513-f5418f4a252b-operator-scripts\") pod \"neutron-db-create-867tt\" (UID: \"a17cf99e-6023-4563-9513-f5418f4a252b\") " pod="openstack/neutron-db-create-867tt" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.174642 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-38bd-account-create-update-jkslr"] Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.182740 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-38bd-account-create-update-jkslr"] Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.182870 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-38bd-account-create-update-jkslr" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.191120 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.193441 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9610-account-create-update-n5g8s" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.206060 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-0b91-account-create-update-f46bp" podStartSLOduration=9.206031646 podStartE2EDuration="9.206031646s" podCreationTimestamp="2026-02-04 08:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:58:11.106652385 +0000 UTC m=+1001.146710140" watchObservedRunningTime="2026-02-04 08:58:11.206031646 +0000 UTC m=+1001.246089401" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.256256 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a17cf99e-6023-4563-9513-f5418f4a252b-operator-scripts\") pod \"neutron-db-create-867tt\" (UID: \"a17cf99e-6023-4563-9513-f5418f4a252b\") " pod="openstack/neutron-db-create-867tt" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.256341 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dsbq\" (UniqueName: \"kubernetes.io/projected/86fcf6d5-331b-4a6e-b65c-3c68d28feb65-kube-api-access-4dsbq\") pod \"barbican-38bd-account-create-update-jkslr\" (UID: \"86fcf6d5-331b-4a6e-b65c-3c68d28feb65\") " pod="openstack/barbican-38bd-account-create-update-jkslr" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.256412 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g76t\" (UniqueName: \"kubernetes.io/projected/a17cf99e-6023-4563-9513-f5418f4a252b-kube-api-access-4g76t\") pod \"neutron-db-create-867tt\" (UID: \"a17cf99e-6023-4563-9513-f5418f4a252b\") " pod="openstack/neutron-db-create-867tt" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.256468 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86fcf6d5-331b-4a6e-b65c-3c68d28feb65-operator-scripts\") pod \"barbican-38bd-account-create-update-jkslr\" (UID: \"86fcf6d5-331b-4a6e-b65c-3c68d28feb65\") " pod="openstack/barbican-38bd-account-create-update-jkslr" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.256971 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a17cf99e-6023-4563-9513-f5418f4a252b-operator-scripts\") pod \"neutron-db-create-867tt\" (UID: \"a17cf99e-6023-4563-9513-f5418f4a252b\") " pod="openstack/neutron-db-create-867tt" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.304352 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g76t\" (UniqueName: \"kubernetes.io/projected/a17cf99e-6023-4563-9513-f5418f4a252b-kube-api-access-4g76t\") pod \"neutron-db-create-867tt\" (UID: \"a17cf99e-6023-4563-9513-f5418f4a252b\") " pod="openstack/neutron-db-create-867tt" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.358173 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86fcf6d5-331b-4a6e-b65c-3c68d28feb65-operator-scripts\") pod \"barbican-38bd-account-create-update-jkslr\" (UID: \"86fcf6d5-331b-4a6e-b65c-3c68d28feb65\") " pod="openstack/barbican-38bd-account-create-update-jkslr" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 
08:58:11.358274 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dsbq\" (UniqueName: \"kubernetes.io/projected/86fcf6d5-331b-4a6e-b65c-3c68d28feb65-kube-api-access-4dsbq\") pod \"barbican-38bd-account-create-update-jkslr\" (UID: \"86fcf6d5-331b-4a6e-b65c-3c68d28feb65\") " pod="openstack/barbican-38bd-account-create-update-jkslr" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.359388 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86fcf6d5-331b-4a6e-b65c-3c68d28feb65-operator-scripts\") pod \"barbican-38bd-account-create-update-jkslr\" (UID: \"86fcf6d5-331b-4a6e-b65c-3c68d28feb65\") " pod="openstack/barbican-38bd-account-create-update-jkslr" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.402892 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dsbq\" (UniqueName: \"kubernetes.io/projected/86fcf6d5-331b-4a6e-b65c-3c68d28feb65-kube-api-access-4dsbq\") pod \"barbican-38bd-account-create-update-jkslr\" (UID: \"86fcf6d5-331b-4a6e-b65c-3c68d28feb65\") " pod="openstack/barbican-38bd-account-create-update-jkslr" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.410690 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-867tt" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.500110 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-daa1-account-create-update-5vttx"] Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.501474 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-daa1-account-create-update-5vttx" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.505234 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.512376 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-38bd-account-create-update-jkslr" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.543529 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-daa1-account-create-update-5vttx"] Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.570453 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9sg2\" (UniqueName: \"kubernetes.io/projected/95fbd688-db27-4267-aa54-c9c90a1b19ab-kube-api-access-q9sg2\") pod \"neutron-daa1-account-create-update-5vttx\" (UID: \"95fbd688-db27-4267-aa54-c9c90a1b19ab\") " pod="openstack/neutron-daa1-account-create-update-5vttx" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.570640 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fbd688-db27-4267-aa54-c9c90a1b19ab-operator-scripts\") pod \"neutron-daa1-account-create-update-5vttx\" (UID: \"95fbd688-db27-4267-aa54-c9c90a1b19ab\") " pod="openstack/neutron-daa1-account-create-update-5vttx" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.584145 4644 scope.go:117] "RemoveContainer" containerID="7f7bb30701847826ebefb75ed7a1ed7777fee763c2d3dbcd827a43fb497ee99c" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.672631 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fbd688-db27-4267-aa54-c9c90a1b19ab-operator-scripts\") pod \"neutron-daa1-account-create-update-5vttx\" (UID: \"95fbd688-db27-4267-aa54-c9c90a1b19ab\") " pod="openstack/neutron-daa1-account-create-update-5vttx" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.672816 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9sg2\" (UniqueName: \"kubernetes.io/projected/95fbd688-db27-4267-aa54-c9c90a1b19ab-kube-api-access-q9sg2\") pod \"neutron-daa1-account-create-update-5vttx\" (UID: \"95fbd688-db27-4267-aa54-c9c90a1b19ab\") " pod="openstack/neutron-daa1-account-create-update-5vttx" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.673604 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fbd688-db27-4267-aa54-c9c90a1b19ab-operator-scripts\") pod \"neutron-daa1-account-create-update-5vttx\" (UID: \"95fbd688-db27-4267-aa54-c9c90a1b19ab\") " pod="openstack/neutron-daa1-account-create-update-5vttx" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.691364 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9sg2\" (UniqueName: \"kubernetes.io/projected/95fbd688-db27-4267-aa54-c9c90a1b19ab-kube-api-access-q9sg2\") pod \"neutron-daa1-account-create-update-5vttx\" (UID: \"95fbd688-db27-4267-aa54-c9c90a1b19ab\") " pod="openstack/neutron-daa1-account-create-update-5vttx" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.821092 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-daa1-account-create-update-5vttx" Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.859569 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-59svq"] Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.865563 4644 generic.go:334] "Generic (PLEG): container finished" podID="e179b679-97af-4318-bdf1-07aedb5117a6" containerID="9659a9c8278a66f0e2139848dd1fad92993f0010801b721a43986571fa12ce22" exitCode=0 Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.865632 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2ncgr" event={"ID":"e179b679-97af-4318-bdf1-07aedb5117a6","Type":"ContainerDied","Data":"9659a9c8278a66f0e2139848dd1fad92993f0010801b721a43986571fa12ce22"} Feb 04 08:58:11 crc kubenswrapper[4644]: I0204 08:58:11.876401 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-59svq"] Feb 04 08:58:12 crc kubenswrapper[4644]: I0204 08:58:12.004883 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-4b99-account-create-update-8vkh4" podStartSLOduration=9.004862904 podStartE2EDuration="9.004862904s" podCreationTimestamp="2026-02-04 08:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:58:11.995779266 +0000 UTC m=+1002.035837031" watchObservedRunningTime="2026-02-04 08:58:12.004862904 +0000 UTC m=+1002.044920659" Feb 04 08:58:12 crc kubenswrapper[4644]: I0204 08:58:12.198010 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vrbhk" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerName="registry-server" probeResult="failure" output=< Feb 04 08:58:12 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 08:58:12 crc kubenswrapper[4644]: > Feb 04 08:58:12 crc kubenswrapper[4644]: I0204 08:58:12.543314 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-38bd-account-create-update-jkslr"] Feb 04 08:58:12 crc kubenswrapper[4644]: I0204 08:58:12.590302 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9610-account-create-update-n5g8s"] Feb 04 08:58:12 crc kubenswrapper[4644]: W0204 08:58:12.688250 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod675884a4_966e_4ca9_b279_e37202cab1d7.slice/crio-4d9d4766864369c72c758e45a153f03413e17d231f662791c302a1abf7be3a95 WatchSource:0}: Error finding container 4d9d4766864369c72c758e45a153f03413e17d231f662791c302a1abf7be3a95: Status 404 returned error can't find the container with id 4d9d4766864369c72c758e45a153f03413e17d231f662791c302a1abf7be3a95 Feb 04 08:58:12 crc kubenswrapper[4644]: I0204 08:58:12.692802 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36db8322-5f0d-474e-baad-17e410c8c9f2" path="/var/lib/kubelet/pods/36db8322-5f0d-474e-baad-17e410c8c9f2/volumes" Feb 04 08:58:12 crc kubenswrapper[4644]: I0204 08:58:12.712449 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-867tt"] Feb 04 08:58:12 crc kubenswrapper[4644]: I0204 08:58:12.793997 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-k9sw7"] Feb 04 08:58:12 crc kubenswrapper[4644]: I0204 08:58:12.834307 4644 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/neutron-daa1-account-create-update-5vttx"] Feb 04 08:58:12 crc kubenswrapper[4644]: I0204 08:58:12.911105 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0" Feb 04 08:58:12 crc kubenswrapper[4644]: E0204 08:58:12.911840 4644 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 04 08:58:12 crc kubenswrapper[4644]: E0204 08:58:12.911861 4644 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 04 08:58:12 crc kubenswrapper[4644]: E0204 08:58:12.911909 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift podName:1344aa43-93ef-4780-a56d-3eb89d55b1a2 nodeName:}" failed. No retries permitted until 2026-02-04 08:58:28.911887226 +0000 UTC m=+1018.951944981 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift") pod "swift-storage-0" (UID: "1344aa43-93ef-4780-a56d-3eb89d55b1a2") : configmap "swift-ring-files" not found Feb 04 08:58:12 crc kubenswrapper[4644]: I0204 08:58:12.963471 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-b6d4b"] Feb 04 08:58:12 crc kubenswrapper[4644]: I0204 08:58:12.996751 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-38bd-account-create-update-jkslr" event={"ID":"86fcf6d5-331b-4a6e-b65c-3c68d28feb65","Type":"ContainerStarted","Data":"a3465572737411078b1a6ac33af7190ddc51cc1711288e029a46a4985f66ea7c"} Feb 04 08:58:13 crc kubenswrapper[4644]: I0204 08:58:13.001772 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-867tt" event={"ID":"a17cf99e-6023-4563-9513-f5418f4a252b","Type":"ContainerStarted","Data":"5c1ea07021d8bf90e50a4933cb371a66c302fad68e1a136ea170b50107079584"} Feb 04 08:58:13 crc kubenswrapper[4644]: I0204 08:58:13.009678 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9610-account-create-update-n5g8s" event={"ID":"675884a4-966e-4ca9-b279-e37202cab1d7","Type":"ContainerStarted","Data":"4d9d4766864369c72c758e45a153f03413e17d231f662791c302a1abf7be3a95"} Feb 04 08:58:13 crc kubenswrapper[4644]: I0204 08:58:13.022187 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-daa1-account-create-update-5vttx" event={"ID":"95fbd688-db27-4267-aa54-c9c90a1b19ab","Type":"ContainerStarted","Data":"f0725d38f87d4225ff2e80453edca50b382f3153cac3523c9ec7957e155af29b"} Feb 04 08:58:13 crc kubenswrapper[4644]: I0204 08:58:13.033293 4644 generic.go:334] "Generic (PLEG): container finished" podID="601e8461-e38b-4282-bdf4-2a8465e6623d" containerID="5d773e64a637cc3d11cda5d41d2669f4fd9bd396859ee5d9f123bf27bb062e40" exitCode=0 Feb 04 08:58:13 crc kubenswrapper[4644]: I0204 08:58:13.033385 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8sw9b" event={"ID":"601e8461-e38b-4282-bdf4-2a8465e6623d","Type":"ContainerDied","Data":"5d773e64a637cc3d11cda5d41d2669f4fd9bd396859ee5d9f123bf27bb062e40"} Feb 04 08:58:13 crc kubenswrapper[4644]: I0204 08:58:13.050073 4644 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-ring-rebalance-sltjx" event={"ID":"5a843b53-7ea4-48d9-9c8a-16be734d66c6","Type":"ContainerStarted","Data":"d1953f7ef02af72fdd95a4d7305fab9fa31dd6e6bd4d7d62a1a636418068f077"} Feb 04 08:58:13 crc kubenswrapper[4644]: I0204 08:58:13.054709 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k9sw7" event={"ID":"9183f642-1a40-4b25-93d6-0835b34764c1","Type":"ContainerStarted","Data":"3495d760baaf0679ca6ec7d47a0f1879f93c9139680fbf9394073204a4cc27ae"} Feb 04 08:58:13 crc kubenswrapper[4644]: I0204 08:58:13.092910 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-sltjx" podStartSLOduration=2.661312301 podStartE2EDuration="16.092883862s" podCreationTimestamp="2026-02-04 08:57:57 +0000 UTC" firstStartedPulling="2026-02-04 08:57:58.315303501 +0000 UTC m=+988.355361256" lastFinishedPulling="2026-02-04 08:58:11.746875062 +0000 UTC m=+1001.786932817" observedRunningTime="2026-02-04 08:58:13.080974485 +0000 UTC m=+1003.121032240" watchObservedRunningTime="2026-02-04 08:58:13.092883862 +0000 UTC m=+1003.132941617" Feb 04 08:58:13 crc kubenswrapper[4644]: I0204 08:58:13.449132 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2ncgr" Feb 04 08:58:13 crc kubenswrapper[4644]: I0204 08:58:13.626807 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e179b679-97af-4318-bdf1-07aedb5117a6-operator-scripts\") pod \"e179b679-97af-4318-bdf1-07aedb5117a6\" (UID: \"e179b679-97af-4318-bdf1-07aedb5117a6\") " Feb 04 08:58:13 crc kubenswrapper[4644]: I0204 08:58:13.627084 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrvjg\" (UniqueName: \"kubernetes.io/projected/e179b679-97af-4318-bdf1-07aedb5117a6-kube-api-access-zrvjg\") pod \"e179b679-97af-4318-bdf1-07aedb5117a6\" (UID: \"e179b679-97af-4318-bdf1-07aedb5117a6\") " Feb 04 08:58:13 crc kubenswrapper[4644]: I0204 08:58:13.627483 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e179b679-97af-4318-bdf1-07aedb5117a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e179b679-97af-4318-bdf1-07aedb5117a6" (UID: "e179b679-97af-4318-bdf1-07aedb5117a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:13 crc kubenswrapper[4644]: I0204 08:58:13.628060 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e179b679-97af-4318-bdf1-07aedb5117a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:13 crc kubenswrapper[4644]: I0204 08:58:13.635569 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e179b679-97af-4318-bdf1-07aedb5117a6-kube-api-access-zrvjg" (OuterVolumeSpecName: "kube-api-access-zrvjg") pod "e179b679-97af-4318-bdf1-07aedb5117a6" (UID: "e179b679-97af-4318-bdf1-07aedb5117a6"). InnerVolumeSpecName "kube-api-access-zrvjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:58:13 crc kubenswrapper[4644]: I0204 08:58:13.730304 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrvjg\" (UniqueName: \"kubernetes.io/projected/e179b679-97af-4318-bdf1-07aedb5117a6-kube-api-access-zrvjg\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.079837 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-38bd-account-create-update-jkslr" event={"ID":"86fcf6d5-331b-4a6e-b65c-3c68d28feb65","Type":"ContainerStarted","Data":"fa75280a8573d2a22be14b02d722c1fbc7505713694c4335be19c2cf46b46498"} Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.082033 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-867tt" event={"ID":"a17cf99e-6023-4563-9513-f5418f4a252b","Type":"ContainerStarted","Data":"8c5204b4112bbeac34302e484ab4714a6f731942b1871d6a182cae93c49f2035"} Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.084306 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9610-account-create-update-n5g8s" event={"ID":"675884a4-966e-4ca9-b279-e37202cab1d7","Type":"ContainerStarted","Data":"073c3cb69189606e749e4e677cd0448d889c69f04a62e52efd6e47ccdb7c738d"} Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.086208 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-daa1-account-create-update-5vttx" event={"ID":"95fbd688-db27-4267-aa54-c9c90a1b19ab","Type":"ContainerStarted","Data":"ab907603f08ccd9cea4238caecdddb1f64bf1ba7180a79313a9ae6d14d78d4a9"} Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.088625 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2ncgr" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.088787 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2ncgr" event={"ID":"e179b679-97af-4318-bdf1-07aedb5117a6","Type":"ContainerDied","Data":"3f77adbf0d00a6c60cc9b6cab1aaf61ce678ad40d3cc7800d7504e457a86b8d4"} Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.088872 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f77adbf0d00a6c60cc9b6cab1aaf61ce678ad40d3cc7800d7504e457a86b8d4" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.089999 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-b6d4b" event={"ID":"18e102f9-a169-43ec-bc1d-de48e8b59376","Type":"ContainerStarted","Data":"032e73a144ee913df2be083403248060f67dcab80e539c86ce3751244569299e"} Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.090024 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-b6d4b" event={"ID":"18e102f9-a169-43ec-bc1d-de48e8b59376","Type":"ContainerStarted","Data":"6c10c4d8696d95785728e1edaef8e118a007c55dd798e8c57a37afcf291b263b"} Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.096583 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k9sw7" event={"ID":"9183f642-1a40-4b25-93d6-0835b34764c1","Type":"ContainerStarted","Data":"11d34483e855f0e997f4ebbd3c9ecc5a8f37e501222216c42ffaec1f064a9986"} Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.104020 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-38bd-account-create-update-jkslr" podStartSLOduration=3.103996692 podStartE2EDuration="3.103996692s" 
podCreationTimestamp="2026-02-04 08:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:58:14.098521403 +0000 UTC m=+1004.138579168" watchObservedRunningTime="2026-02-04 08:58:14.103996692 +0000 UTC m=+1004.144054447" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.124151 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-867tt" podStartSLOduration=4.124127343 podStartE2EDuration="4.124127343s" podCreationTimestamp="2026-02-04 08:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:58:14.12071579 +0000 UTC m=+1004.160773555" watchObservedRunningTime="2026-02-04 08:58:14.124127343 +0000 UTC m=+1004.164185108" Feb 04 08:58:14 crc kubenswrapper[4644]: E0204 08:58:14.128911 4644 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26972986_ae99_445c_8cfa_ef894a5427c3.slice/crio-ff090da455a8744d4e5521ddad8de8fdafca5cf03e6f1011ae84346550cab925.scope\": RecentStats: unable to find data in memory cache]" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.142274 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-9610-account-create-update-n5g8s" podStartSLOduration=4.142250289 podStartE2EDuration="4.142250289s" podCreationTimestamp="2026-02-04 08:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:58:14.136024889 +0000 UTC m=+1004.176082654" watchObservedRunningTime="2026-02-04 08:58:14.142250289 +0000 UTC m=+1004.182308044" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.228860 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-k9sw7" podStartSLOduration=4.22884061 podStartE2EDuration="4.22884061s" podCreationTimestamp="2026-02-04 08:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:58:14.159960145 +0000 UTC m=+1004.200017900" watchObservedRunningTime="2026-02-04 08:58:14.22884061 +0000 UTC m=+1004.268898365" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.234593 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-b6d4b" podStartSLOduration=4.234571867 podStartE2EDuration="4.234571867s" podCreationTimestamp="2026-02-04 08:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:58:14.204814342 +0000 UTC m=+1004.244872097" watchObservedRunningTime="2026-02-04 08:58:14.234571867 +0000 UTC m=+1004.274629622" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.261276 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-daa1-account-create-update-5vttx" podStartSLOduration=3.261255618 podStartE2EDuration="3.261255618s" podCreationTimestamp="2026-02-04 08:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:58:14.251274264 +0000 UTC m=+1004.291332019" watchObservedRunningTime="2026-02-04 
08:58:14.261255618 +0000 UTC m=+1004.301313373" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.338680 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6dczk"] Feb 04 08:58:14 crc kubenswrapper[4644]: E0204 08:58:14.341176 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e179b679-97af-4318-bdf1-07aedb5117a6" containerName="mariadb-database-create" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.341201 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="e179b679-97af-4318-bdf1-07aedb5117a6" containerName="mariadb-database-create" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.341430 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="e179b679-97af-4318-bdf1-07aedb5117a6" containerName="mariadb-database-create" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.342155 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6dczk" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.344447 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bh6g5" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.346261 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.349055 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6dczk"] Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.445709 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92slt\" (UniqueName: \"kubernetes.io/projected/f1573f43-1a60-4b32-8286-02fb06f9d3a8-kube-api-access-92slt\") pod \"glance-db-sync-6dczk\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " pod="openstack/glance-db-sync-6dczk" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.445786 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-combined-ca-bundle\") pod \"glance-db-sync-6dczk\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " pod="openstack/glance-db-sync-6dczk" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.445821 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-db-sync-config-data\") pod \"glance-db-sync-6dczk\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " pod="openstack/glance-db-sync-6dczk" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.445856 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-config-data\") pod \"glance-db-sync-6dczk\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " pod="openstack/glance-db-sync-6dczk" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.547476 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92slt\" (UniqueName: \"kubernetes.io/projected/f1573f43-1a60-4b32-8286-02fb06f9d3a8-kube-api-access-92slt\") pod \"glance-db-sync-6dczk\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " pod="openstack/glance-db-sync-6dczk" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.547583 4644 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-combined-ca-bundle\") pod \"glance-db-sync-6dczk\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " pod="openstack/glance-db-sync-6dczk" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.547622 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-db-sync-config-data\") pod \"glance-db-sync-6dczk\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " pod="openstack/glance-db-sync-6dczk" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.547664 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-config-data\") pod \"glance-db-sync-6dczk\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " pod="openstack/glance-db-sync-6dczk" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.554864 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-db-sync-config-data\") pod \"glance-db-sync-6dczk\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " pod="openstack/glance-db-sync-6dczk" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.556845 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-combined-ca-bundle\") pod \"glance-db-sync-6dczk\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " pod="openstack/glance-db-sync-6dczk" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.562064 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-config-data\") pod \"glance-db-sync-6dczk\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " pod="openstack/glance-db-sync-6dczk" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.579909 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92slt\" (UniqueName: \"kubernetes.io/projected/f1573f43-1a60-4b32-8286-02fb06f9d3a8-kube-api-access-92slt\") pod \"glance-db-sync-6dczk\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " pod="openstack/glance-db-sync-6dczk" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.659660 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8sw9b" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.663906 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6dczk" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.860496 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/601e8461-e38b-4282-bdf4-2a8465e6623d-operator-scripts\") pod \"601e8461-e38b-4282-bdf4-2a8465e6623d\" (UID: \"601e8461-e38b-4282-bdf4-2a8465e6623d\") " Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.860897 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz8jz\" (UniqueName: \"kubernetes.io/projected/601e8461-e38b-4282-bdf4-2a8465e6623d-kube-api-access-bz8jz\") pod \"601e8461-e38b-4282-bdf4-2a8465e6623d\" (UID: \"601e8461-e38b-4282-bdf4-2a8465e6623d\") " Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.862429 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/601e8461-e38b-4282-bdf4-2a8465e6623d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "601e8461-e38b-4282-bdf4-2a8465e6623d" (UID: "601e8461-e38b-4282-bdf4-2a8465e6623d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.867540 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/601e8461-e38b-4282-bdf4-2a8465e6623d-kube-api-access-bz8jz" (OuterVolumeSpecName: "kube-api-access-bz8jz") pod "601e8461-e38b-4282-bdf4-2a8465e6623d" (UID: "601e8461-e38b-4282-bdf4-2a8465e6623d"). InnerVolumeSpecName "kube-api-access-bz8jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.966143 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz8jz\" (UniqueName: \"kubernetes.io/projected/601e8461-e38b-4282-bdf4-2a8465e6623d-kube-api-access-bz8jz\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:14 crc kubenswrapper[4644]: I0204 08:58:14.966271 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/601e8461-e38b-4282-bdf4-2a8465e6623d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:15 crc kubenswrapper[4644]: I0204 08:58:15.119425 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8sw9b" event={"ID":"601e8461-e38b-4282-bdf4-2a8465e6623d","Type":"ContainerDied","Data":"6661f743ab7bba915d388bd5a3b87f767e662e825f63710e81855fe4cdf8e57f"} Feb 04 08:58:15 crc kubenswrapper[4644]: I0204 08:58:15.119800 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6661f743ab7bba915d388bd5a3b87f767e662e825f63710e81855fe4cdf8e57f" Feb 04 08:58:15 crc kubenswrapper[4644]: I0204 08:58:15.119459 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-8sw9b" Feb 04 08:58:15 crc kubenswrapper[4644]: I0204 08:58:15.125283 4644 generic.go:334] "Generic (PLEG): container finished" podID="93002c20-2499-4505-821e-a86b63dc5d97" containerID="6f35bbf13c80b1888ba9266546f5e4a1181b8c282f7162a9edcf970e9384efa8" exitCode=0 Feb 04 08:58:15 crc kubenswrapper[4644]: I0204 08:58:15.125363 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b99-account-create-update-8vkh4" event={"ID":"93002c20-2499-4505-821e-a86b63dc5d97","Type":"ContainerDied","Data":"6f35bbf13c80b1888ba9266546f5e4a1181b8c282f7162a9edcf970e9384efa8"} Feb 04 08:58:15 crc kubenswrapper[4644]: I0204 08:58:15.130608 4644 generic.go:334] "Generic (PLEG): container finished" podID="26972986-ae99-445c-8cfa-ef894a5427c3" containerID="ff090da455a8744d4e5521ddad8de8fdafca5cf03e6f1011ae84346550cab925" exitCode=0 Feb 04 08:58:15 crc kubenswrapper[4644]: I0204 08:58:15.131639 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0b91-account-create-update-f46bp" event={"ID":"26972986-ae99-445c-8cfa-ef894a5427c3","Type":"ContainerDied","Data":"ff090da455a8744d4e5521ddad8de8fdafca5cf03e6f1011ae84346550cab925"} Feb 04 08:58:15 crc kubenswrapper[4644]: I0204 08:58:15.413138 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6dczk"] Feb 04 08:58:15 crc kubenswrapper[4644]: I0204 08:58:15.864878 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=< Feb 04 08:58:15 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 08:58:15 crc kubenswrapper[4644]: > Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.141081 4644 generic.go:334] "Generic (PLEG): container finished" podID="a17cf99e-6023-4563-9513-f5418f4a252b" containerID="8c5204b4112bbeac34302e484ab4714a6f731942b1871d6a182cae93c49f2035" exitCode=0 Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.141163 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-867tt" event={"ID":"a17cf99e-6023-4563-9513-f5418f4a252b","Type":"ContainerDied","Data":"8c5204b4112bbeac34302e484ab4714a6f731942b1871d6a182cae93c49f2035"} Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.143088 4644 generic.go:334] "Generic (PLEG): container finished" podID="18e102f9-a169-43ec-bc1d-de48e8b59376" containerID="032e73a144ee913df2be083403248060f67dcab80e539c86ce3751244569299e" exitCode=0 Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.143126 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-b6d4b" event={"ID":"18e102f9-a169-43ec-bc1d-de48e8b59376","Type":"ContainerDied","Data":"032e73a144ee913df2be083403248060f67dcab80e539c86ce3751244569299e"} Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.144195 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6dczk" event={"ID":"f1573f43-1a60-4b32-8286-02fb06f9d3a8","Type":"ContainerStarted","Data":"604e72f9452cbebd32debfcc658c3eb8df0eaa80a0418a24c320b0371f38354c"} Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.146103 4644 generic.go:334] "Generic (PLEG): container finished" podID="9183f642-1a40-4b25-93d6-0835b34764c1" containerID="11d34483e855f0e997f4ebbd3c9ecc5a8f37e501222216c42ffaec1f064a9986" exitCode=0 Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.146245 
4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k9sw7" event={"ID":"9183f642-1a40-4b25-93d6-0835b34764c1","Type":"ContainerDied","Data":"11d34483e855f0e997f4ebbd3c9ecc5a8f37e501222216c42ffaec1f064a9986"} Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.594246 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4b99-account-create-update-8vkh4" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.600021 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0b91-account-create-update-f46bp" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.791844 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93002c20-2499-4505-821e-a86b63dc5d97-operator-scripts\") pod \"93002c20-2499-4505-821e-a86b63dc5d97\" (UID: \"93002c20-2499-4505-821e-a86b63dc5d97\") " Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.792172 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hccqb\" (UniqueName: \"kubernetes.io/projected/26972986-ae99-445c-8cfa-ef894a5427c3-kube-api-access-hccqb\") pod \"26972986-ae99-445c-8cfa-ef894a5427c3\" (UID: \"26972986-ae99-445c-8cfa-ef894a5427c3\") " Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.792792 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93002c20-2499-4505-821e-a86b63dc5d97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93002c20-2499-4505-821e-a86b63dc5d97" (UID: "93002c20-2499-4505-821e-a86b63dc5d97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.792966 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26972986-ae99-445c-8cfa-ef894a5427c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26972986-ae99-445c-8cfa-ef894a5427c3" (UID: "26972986-ae99-445c-8cfa-ef894a5427c3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.793354 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26972986-ae99-445c-8cfa-ef894a5427c3-operator-scripts\") pod \"26972986-ae99-445c-8cfa-ef894a5427c3\" (UID: \"26972986-ae99-445c-8cfa-ef894a5427c3\") " Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.793582 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knnj6\" (UniqueName: \"kubernetes.io/projected/93002c20-2499-4505-821e-a86b63dc5d97-kube-api-access-knnj6\") pod \"93002c20-2499-4505-821e-a86b63dc5d97\" (UID: \"93002c20-2499-4505-821e-a86b63dc5d97\") " Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.794678 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93002c20-2499-4505-821e-a86b63dc5d97-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.794807 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26972986-ae99-445c-8cfa-ef894a5427c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.798580 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93002c20-2499-4505-821e-a86b63dc5d97-kube-api-access-knnj6" (OuterVolumeSpecName: "kube-api-access-knnj6") pod "93002c20-2499-4505-821e-a86b63dc5d97" (UID: "93002c20-2499-4505-821e-a86b63dc5d97"). InnerVolumeSpecName "kube-api-access-knnj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.800424 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26972986-ae99-445c-8cfa-ef894a5427c3-kube-api-access-hccqb" (OuterVolumeSpecName: "kube-api-access-hccqb") pod "26972986-ae99-445c-8cfa-ef894a5427c3" (UID: "26972986-ae99-445c-8cfa-ef894a5427c3"). InnerVolumeSpecName "kube-api-access-hccqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.868145 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wt9t7"] Feb 04 08:58:16 crc kubenswrapper[4644]: E0204 08:58:16.868558 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93002c20-2499-4505-821e-a86b63dc5d97" containerName="mariadb-account-create-update" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.868581 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="93002c20-2499-4505-821e-a86b63dc5d97" containerName="mariadb-account-create-update" Feb 04 08:58:16 crc kubenswrapper[4644]: E0204 08:58:16.868616 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601e8461-e38b-4282-bdf4-2a8465e6623d" containerName="mariadb-database-create" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.868623 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="601e8461-e38b-4282-bdf4-2a8465e6623d" containerName="mariadb-database-create" Feb 04 08:58:16 crc kubenswrapper[4644]: E0204 08:58:16.868636 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26972986-ae99-445c-8cfa-ef894a5427c3" containerName="mariadb-account-create-update" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.868642 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="26972986-ae99-445c-8cfa-ef894a5427c3" containerName="mariadb-account-create-update" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.868827 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="26972986-ae99-445c-8cfa-ef894a5427c3" containerName="mariadb-account-create-update" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.868858 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="93002c20-2499-4505-821e-a86b63dc5d97" containerName="mariadb-account-create-update" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.868868 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="601e8461-e38b-4282-bdf4-2a8465e6623d" containerName="mariadb-database-create" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.878162 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wt9t7"] Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.878621 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wt9t7" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.882185 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.896102 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4cqd\" (UniqueName: \"kubernetes.io/projected/9c2f0296-f7ca-4c5d-bbf0-d77692ee814b-kube-api-access-q4cqd\") pod \"root-account-create-update-wt9t7\" (UID: \"9c2f0296-f7ca-4c5d-bbf0-d77692ee814b\") " pod="openstack/root-account-create-update-wt9t7" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.896238 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2f0296-f7ca-4c5d-bbf0-d77692ee814b-operator-scripts\") pod \"root-account-create-update-wt9t7\" (UID: \"9c2f0296-f7ca-4c5d-bbf0-d77692ee814b\") " pod="openstack/root-account-create-update-wt9t7" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.896363 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knnj6\" (UniqueName: \"kubernetes.io/projected/93002c20-2499-4505-821e-a86b63dc5d97-kube-api-access-knnj6\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.896384 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hccqb\" (UniqueName: \"kubernetes.io/projected/26972986-ae99-445c-8cfa-ef894a5427c3-kube-api-access-hccqb\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.998360 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4cqd\" (UniqueName: \"kubernetes.io/projected/9c2f0296-f7ca-4c5d-bbf0-d77692ee814b-kube-api-access-q4cqd\") pod \"root-account-create-update-wt9t7\" (UID: \"9c2f0296-f7ca-4c5d-bbf0-d77692ee814b\") " pod="openstack/root-account-create-update-wt9t7" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.998491 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2f0296-f7ca-4c5d-bbf0-d77692ee814b-operator-scripts\") pod \"root-account-create-update-wt9t7\" (UID: \"9c2f0296-f7ca-4c5d-bbf0-d77692ee814b\") " pod="openstack/root-account-create-update-wt9t7" Feb 04 08:58:16 crc kubenswrapper[4644]: I0204 08:58:16.999404 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2f0296-f7ca-4c5d-bbf0-d77692ee814b-operator-scripts\") pod \"root-account-create-update-wt9t7\" (UID: \"9c2f0296-f7ca-4c5d-bbf0-d77692ee814b\") " pod="openstack/root-account-create-update-wt9t7" Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.042907 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4cqd\" (UniqueName: \"kubernetes.io/projected/9c2f0296-f7ca-4c5d-bbf0-d77692ee814b-kube-api-access-q4cqd\") pod \"root-account-create-update-wt9t7\" (UID: \"9c2f0296-f7ca-4c5d-bbf0-d77692ee814b\") " pod="openstack/root-account-create-update-wt9t7" Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.157122 4644 generic.go:334] "Generic (PLEG): container finished" podID="95fbd688-db27-4267-aa54-c9c90a1b19ab" containerID="ab907603f08ccd9cea4238caecdddb1f64bf1ba7180a79313a9ae6d14d78d4a9" exitCode=0 Feb 04 08:58:17 crc 
kubenswrapper[4644]: I0204 08:58:17.157194 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-daa1-account-create-update-5vttx" event={"ID":"95fbd688-db27-4267-aa54-c9c90a1b19ab","Type":"ContainerDied","Data":"ab907603f08ccd9cea4238caecdddb1f64bf1ba7180a79313a9ae6d14d78d4a9"} Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.158745 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b99-account-create-update-8vkh4" event={"ID":"93002c20-2499-4505-821e-a86b63dc5d97","Type":"ContainerDied","Data":"81d2b9555fc50204c0a0202485eb6b38c747988031787ede7b73d10b3919bf07"} Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.158766 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81d2b9555fc50204c0a0202485eb6b38c747988031787ede7b73d10b3919bf07" Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.158804 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4b99-account-create-update-8vkh4" Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.160282 4644 generic.go:334] "Generic (PLEG): container finished" podID="86fcf6d5-331b-4a6e-b65c-3c68d28feb65" containerID="fa75280a8573d2a22be14b02d722c1fbc7505713694c4335be19c2cf46b46498" exitCode=0 Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.160318 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-38bd-account-create-update-jkslr" event={"ID":"86fcf6d5-331b-4a6e-b65c-3c68d28feb65","Type":"ContainerDied","Data":"fa75280a8573d2a22be14b02d722c1fbc7505713694c4335be19c2cf46b46498"} Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.162206 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0b91-account-create-update-f46bp" event={"ID":"26972986-ae99-445c-8cfa-ef894a5427c3","Type":"ContainerDied","Data":"87a12fd812f2a2015325bc2050a0c490858b3d85176c068bd992f30775ce4d43"} Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.162226 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87a12fd812f2a2015325bc2050a0c490858b3d85176c068bd992f30775ce4d43" Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.162264 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0b91-account-create-update-f46bp" Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.164219 4644 generic.go:334] "Generic (PLEG): container finished" podID="675884a4-966e-4ca9-b279-e37202cab1d7" containerID="073c3cb69189606e749e4e677cd0448d889c69f04a62e52efd6e47ccdb7c738d" exitCode=0 Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.164898 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9610-account-create-update-n5g8s" event={"ID":"675884a4-966e-4ca9-b279-e37202cab1d7","Type":"ContainerDied","Data":"073c3cb69189606e749e4e677cd0448d889c69f04a62e52efd6e47ccdb7c738d"} Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.202993 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wt9t7" Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.738970 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-867tt" Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.918591 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a17cf99e-6023-4563-9513-f5418f4a252b-operator-scripts\") pod \"a17cf99e-6023-4563-9513-f5418f4a252b\" (UID: \"a17cf99e-6023-4563-9513-f5418f4a252b\") " Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.918721 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g76t\" (UniqueName: \"kubernetes.io/projected/a17cf99e-6023-4563-9513-f5418f4a252b-kube-api-access-4g76t\") pod \"a17cf99e-6023-4563-9513-f5418f4a252b\" (UID: \"a17cf99e-6023-4563-9513-f5418f4a252b\") " Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.920973 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17cf99e-6023-4563-9513-f5418f4a252b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a17cf99e-6023-4563-9513-f5418f4a252b" (UID: "a17cf99e-6023-4563-9513-f5418f4a252b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.933977 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-b6d4b" Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.934989 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a17cf99e-6023-4563-9513-f5418f4a252b-kube-api-access-4g76t" (OuterVolumeSpecName: "kube-api-access-4g76t") pod "a17cf99e-6023-4563-9513-f5418f4a252b" (UID: "a17cf99e-6023-4563-9513-f5418f4a252b"). InnerVolumeSpecName "kube-api-access-4g76t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:58:17 crc kubenswrapper[4644]: I0204 08:58:17.937777 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-k9sw7" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.020653 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a17cf99e-6023-4563-9513-f5418f4a252b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.020693 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g76t\" (UniqueName: \"kubernetes.io/projected/a17cf99e-6023-4563-9513-f5418f4a252b-kube-api-access-4g76t\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.100878 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wt9t7"] Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.123899 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lr2x\" (UniqueName: \"kubernetes.io/projected/9183f642-1a40-4b25-93d6-0835b34764c1-kube-api-access-8lr2x\") pod \"9183f642-1a40-4b25-93d6-0835b34764c1\" (UID: \"9183f642-1a40-4b25-93d6-0835b34764c1\") " Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.123999 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qgq7\" (UniqueName: \"kubernetes.io/projected/18e102f9-a169-43ec-bc1d-de48e8b59376-kube-api-access-2qgq7\") pod \"18e102f9-a169-43ec-bc1d-de48e8b59376\" (UID: \"18e102f9-a169-43ec-bc1d-de48e8b59376\") " Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.124148 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18e102f9-a169-43ec-bc1d-de48e8b59376-operator-scripts\") pod \"18e102f9-a169-43ec-bc1d-de48e8b59376\" (UID: \"18e102f9-a169-43ec-bc1d-de48e8b59376\") " Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.124207 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9183f642-1a40-4b25-93d6-0835b34764c1-operator-scripts\") pod \"9183f642-1a40-4b25-93d6-0835b34764c1\" (UID: \"9183f642-1a40-4b25-93d6-0835b34764c1\") " Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.125178 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9183f642-1a40-4b25-93d6-0835b34764c1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9183f642-1a40-4b25-93d6-0835b34764c1" (UID: "9183f642-1a40-4b25-93d6-0835b34764c1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.125612 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18e102f9-a169-43ec-bc1d-de48e8b59376-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18e102f9-a169-43ec-bc1d-de48e8b59376" (UID: "18e102f9-a169-43ec-bc1d-de48e8b59376"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:18 crc kubenswrapper[4644]: W0204 08:58:18.126666 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c2f0296_f7ca_4c5d_bbf0_d77692ee814b.slice/crio-f557aeb430f8c205a38e50277c25b171cd1bda8ccffc4d644d9e207b72ded857 WatchSource:0}: Error finding container f557aeb430f8c205a38e50277c25b171cd1bda8ccffc4d644d9e207b72ded857: Status 404 returned error can't find the container with id f557aeb430f8c205a38e50277c25b171cd1bda8ccffc4d644d9e207b72ded857 Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.129058 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9183f642-1a40-4b25-93d6-0835b34764c1-kube-api-access-8lr2x" (OuterVolumeSpecName: "kube-api-access-8lr2x") pod "9183f642-1a40-4b25-93d6-0835b34764c1" (UID: "9183f642-1a40-4b25-93d6-0835b34764c1"). InnerVolumeSpecName "kube-api-access-8lr2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.140127 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18e102f9-a169-43ec-bc1d-de48e8b59376-kube-api-access-2qgq7" (OuterVolumeSpecName: "kube-api-access-2qgq7") pod "18e102f9-a169-43ec-bc1d-de48e8b59376" (UID: "18e102f9-a169-43ec-bc1d-de48e8b59376"). InnerVolumeSpecName "kube-api-access-2qgq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.197510 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-867tt" event={"ID":"a17cf99e-6023-4563-9513-f5418f4a252b","Type":"ContainerDied","Data":"5c1ea07021d8bf90e50a4933cb371a66c302fad68e1a136ea170b50107079584"} Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.197549 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c1ea07021d8bf90e50a4933cb371a66c302fad68e1a136ea170b50107079584" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.197613 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-867tt" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.209574 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wt9t7" event={"ID":"9c2f0296-f7ca-4c5d-bbf0-d77692ee814b","Type":"ContainerStarted","Data":"f557aeb430f8c205a38e50277c25b171cd1bda8ccffc4d644d9e207b72ded857"} Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.222822 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-b6d4b" event={"ID":"18e102f9-a169-43ec-bc1d-de48e8b59376","Type":"ContainerDied","Data":"6c10c4d8696d95785728e1edaef8e118a007c55dd798e8c57a37afcf291b263b"} Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.222862 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c10c4d8696d95785728e1edaef8e118a007c55dd798e8c57a37afcf291b263b" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.222934 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-b6d4b" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.230951 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18e102f9-a169-43ec-bc1d-de48e8b59376-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.230982 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9183f642-1a40-4b25-93d6-0835b34764c1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.230995 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lr2x\" (UniqueName: \"kubernetes.io/projected/9183f642-1a40-4b25-93d6-0835b34764c1-kube-api-access-8lr2x\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.231007 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qgq7\" (UniqueName: \"kubernetes.io/projected/18e102f9-a169-43ec-bc1d-de48e8b59376-kube-api-access-2qgq7\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.258092 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k9sw7" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.258556 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k9sw7" event={"ID":"9183f642-1a40-4b25-93d6-0835b34764c1","Type":"ContainerDied","Data":"3495d760baaf0679ca6ec7d47a0f1879f93c9139680fbf9394073204a4cc27ae"} Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.258581 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3495d760baaf0679ca6ec7d47a0f1879f93c9139680fbf9394073204a4cc27ae" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.864035 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-daa1-account-create-update-5vttx" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.941967 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-f4mlq"] Feb 04 08:58:18 crc kubenswrapper[4644]: E0204 08:58:18.942304 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9183f642-1a40-4b25-93d6-0835b34764c1" containerName="mariadb-database-create" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.942316 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="9183f642-1a40-4b25-93d6-0835b34764c1" containerName="mariadb-database-create" Feb 04 08:58:18 crc kubenswrapper[4644]: E0204 08:58:18.947825 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a17cf99e-6023-4563-9513-f5418f4a252b" containerName="mariadb-database-create" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.947859 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="a17cf99e-6023-4563-9513-f5418f4a252b" containerName="mariadb-database-create" Feb 04 08:58:18 crc kubenswrapper[4644]: E0204 08:58:18.947885 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fbd688-db27-4267-aa54-c9c90a1b19ab" containerName="mariadb-account-create-update" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.947892 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fbd688-db27-4267-aa54-c9c90a1b19ab" containerName="mariadb-account-create-update" Feb 04 08:58:18 crc kubenswrapper[4644]: E0204 08:58:18.947920 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e102f9-a169-43ec-bc1d-de48e8b59376" containerName="mariadb-database-create" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.947927 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e102f9-a169-43ec-bc1d-de48e8b59376" containerName="mariadb-database-create" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.948256 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e102f9-a169-43ec-bc1d-de48e8b59376" containerName="mariadb-database-create" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.948273 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="a17cf99e-6023-4563-9513-f5418f4a252b" containerName="mariadb-database-create" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.948288 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="95fbd688-db27-4267-aa54-c9c90a1b19ab" containerName="mariadb-account-create-update" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.948300 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="9183f642-1a40-4b25-93d6-0835b34764c1" containerName="mariadb-database-create" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.950431 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-f4mlq" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.957462 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.957525 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.961290 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.961404 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4x467" Feb 04 08:58:18 crc kubenswrapper[4644]: I0204 08:58:18.970162 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-f4mlq"] Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.040009 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-38bd-account-create-update-jkslr" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.057049 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9sg2\" (UniqueName: \"kubernetes.io/projected/95fbd688-db27-4267-aa54-c9c90a1b19ab-kube-api-access-q9sg2\") pod \"95fbd688-db27-4267-aa54-c9c90a1b19ab\" (UID: \"95fbd688-db27-4267-aa54-c9c90a1b19ab\") " Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.057616 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dsbq\" (UniqueName: \"kubernetes.io/projected/86fcf6d5-331b-4a6e-b65c-3c68d28feb65-kube-api-access-4dsbq\") pod \"86fcf6d5-331b-4a6e-b65c-3c68d28feb65\" (UID: \"86fcf6d5-331b-4a6e-b65c-3c68d28feb65\") " Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.057677 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fbd688-db27-4267-aa54-c9c90a1b19ab-operator-scripts\") pod \"95fbd688-db27-4267-aa54-c9c90a1b19ab\" (UID: \"95fbd688-db27-4267-aa54-c9c90a1b19ab\") " Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.057829 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-combined-ca-bundle\") pod \"keystone-db-sync-f4mlq\" (UID: \"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31\") " pod="openstack/keystone-db-sync-f4mlq" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.057914 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg9mr\" (UniqueName: \"kubernetes.io/projected/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-kube-api-access-vg9mr\") pod \"keystone-db-sync-f4mlq\" (UID: \"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31\") " pod="openstack/keystone-db-sync-f4mlq" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.057953 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-config-data\") pod \"keystone-db-sync-f4mlq\" (UID: \"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31\") " pod="openstack/keystone-db-sync-f4mlq" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.059426 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/95fbd688-db27-4267-aa54-c9c90a1b19ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95fbd688-db27-4267-aa54-c9c90a1b19ab" (UID: "95fbd688-db27-4267-aa54-c9c90a1b19ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.066915 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86fcf6d5-331b-4a6e-b65c-3c68d28feb65-kube-api-access-4dsbq" (OuterVolumeSpecName: "kube-api-access-4dsbq") pod "86fcf6d5-331b-4a6e-b65c-3c68d28feb65" (UID: "86fcf6d5-331b-4a6e-b65c-3c68d28feb65"). InnerVolumeSpecName "kube-api-access-4dsbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.071041 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9610-account-create-update-n5g8s" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.075453 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95fbd688-db27-4267-aa54-c9c90a1b19ab-kube-api-access-q9sg2" (OuterVolumeSpecName: "kube-api-access-q9sg2") pod "95fbd688-db27-4267-aa54-c9c90a1b19ab" (UID: "95fbd688-db27-4267-aa54-c9c90a1b19ab"). InnerVolumeSpecName "kube-api-access-q9sg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.159238 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-765wd\" (UniqueName: \"kubernetes.io/projected/675884a4-966e-4ca9-b279-e37202cab1d7-kube-api-access-765wd\") pod \"675884a4-966e-4ca9-b279-e37202cab1d7\" (UID: \"675884a4-966e-4ca9-b279-e37202cab1d7\") " Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.159337 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86fcf6d5-331b-4a6e-b65c-3c68d28feb65-operator-scripts\") pod \"86fcf6d5-331b-4a6e-b65c-3c68d28feb65\" (UID: \"86fcf6d5-331b-4a6e-b65c-3c68d28feb65\") " Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.159394 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/675884a4-966e-4ca9-b279-e37202cab1d7-operator-scripts\") pod \"675884a4-966e-4ca9-b279-e37202cab1d7\" (UID: \"675884a4-966e-4ca9-b279-e37202cab1d7\") " Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.159552 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-combined-ca-bundle\") pod \"keystone-db-sync-f4mlq\" (UID: \"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31\") " pod="openstack/keystone-db-sync-f4mlq" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.159600 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg9mr\" (UniqueName: \"kubernetes.io/projected/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-kube-api-access-vg9mr\") pod \"keystone-db-sync-f4mlq\" (UID: \"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31\") " pod="openstack/keystone-db-sync-f4mlq" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.159626 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-config-data\") pod 
\"keystone-db-sync-f4mlq\" (UID: \"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31\") " pod="openstack/keystone-db-sync-f4mlq" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.159679 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dsbq\" (UniqueName: \"kubernetes.io/projected/86fcf6d5-331b-4a6e-b65c-3c68d28feb65-kube-api-access-4dsbq\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.159692 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fbd688-db27-4267-aa54-c9c90a1b19ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.159784 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9sg2\" (UniqueName: \"kubernetes.io/projected/95fbd688-db27-4267-aa54-c9c90a1b19ab-kube-api-access-q9sg2\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.160563 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/675884a4-966e-4ca9-b279-e37202cab1d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "675884a4-966e-4ca9-b279-e37202cab1d7" (UID: "675884a4-966e-4ca9-b279-e37202cab1d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.161299 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86fcf6d5-331b-4a6e-b65c-3c68d28feb65-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86fcf6d5-331b-4a6e-b65c-3c68d28feb65" (UID: "86fcf6d5-331b-4a6e-b65c-3c68d28feb65"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.164301 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/675884a4-966e-4ca9-b279-e37202cab1d7-kube-api-access-765wd" (OuterVolumeSpecName: "kube-api-access-765wd") pod "675884a4-966e-4ca9-b279-e37202cab1d7" (UID: "675884a4-966e-4ca9-b279-e37202cab1d7"). InnerVolumeSpecName "kube-api-access-765wd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.164416 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-combined-ca-bundle\") pod \"keystone-db-sync-f4mlq\" (UID: \"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31\") " pod="openstack/keystone-db-sync-f4mlq" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.175458 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-config-data\") pod \"keystone-db-sync-f4mlq\" (UID: \"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31\") " pod="openstack/keystone-db-sync-f4mlq" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.179682 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg9mr\" (UniqueName: \"kubernetes.io/projected/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-kube-api-access-vg9mr\") pod \"keystone-db-sync-f4mlq\" (UID: \"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31\") " pod="openstack/keystone-db-sync-f4mlq" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.262450 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86fcf6d5-331b-4a6e-b65c-3c68d28feb65-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.262488 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/675884a4-966e-4ca9-b279-e37202cab1d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.262498 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-765wd\" (UniqueName: \"kubernetes.io/projected/675884a4-966e-4ca9-b279-e37202cab1d7-kube-api-access-765wd\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.275806 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-38bd-account-create-update-jkslr" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.276550 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-38bd-account-create-update-jkslr" event={"ID":"86fcf6d5-331b-4a6e-b65c-3c68d28feb65","Type":"ContainerDied","Data":"a3465572737411078b1a6ac33af7190ddc51cc1711288e029a46a4985f66ea7c"} Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.276595 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3465572737411078b1a6ac33af7190ddc51cc1711288e029a46a4985f66ea7c" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.279081 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9610-account-create-update-n5g8s" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.279126 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9610-account-create-update-n5g8s" event={"ID":"675884a4-966e-4ca9-b279-e37202cab1d7","Type":"ContainerDied","Data":"4d9d4766864369c72c758e45a153f03413e17d231f662791c302a1abf7be3a95"} Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.279177 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d9d4766864369c72c758e45a153f03413e17d231f662791c302a1abf7be3a95" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.281687 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wt9t7" event={"ID":"9c2f0296-f7ca-4c5d-bbf0-d77692ee814b","Type":"ContainerStarted","Data":"bdb3c12ec6143ceda9b880281d7c87f98796f2ff0a73ffdc4fa2674590e0fdc1"} Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.286749 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-daa1-account-create-update-5vttx" event={"ID":"95fbd688-db27-4267-aa54-c9c90a1b19ab","Type":"ContainerDied","Data":"f0725d38f87d4225ff2e80453edca50b382f3153cac3523c9ec7957e155af29b"} Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.286780 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0725d38f87d4225ff2e80453edca50b382f3153cac3523c9ec7957e155af29b" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.286851 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-daa1-account-create-update-5vttx" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.302934 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-wt9t7" podStartSLOduration=3.302916524 podStartE2EDuration="3.302916524s" podCreationTimestamp="2026-02-04 08:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:58:19.298751101 +0000 UTC m=+1009.338808856" watchObservedRunningTime="2026-02-04 08:58:19.302916524 +0000 UTC m=+1009.342974279" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.361766 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-f4mlq" Feb 04 08:58:19 crc kubenswrapper[4644]: I0204 08:58:19.900226 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-f4mlq"] Feb 04 08:58:20 crc kubenswrapper[4644]: I0204 08:58:20.295857 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f4mlq" event={"ID":"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31","Type":"ContainerStarted","Data":"86b2413aa2f9a5d51268966e135d31cacabfa253a39406c12b2959ca648dccd1"} Feb 04 08:58:20 crc kubenswrapper[4644]: I0204 08:58:20.302280 4644 generic.go:334] "Generic (PLEG): container finished" podID="9c2f0296-f7ca-4c5d-bbf0-d77692ee814b" containerID="bdb3c12ec6143ceda9b880281d7c87f98796f2ff0a73ffdc4fa2674590e0fdc1" exitCode=0 Feb 04 08:58:20 crc kubenswrapper[4644]: I0204 08:58:20.302363 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wt9t7" event={"ID":"9c2f0296-f7ca-4c5d-bbf0-d77692ee814b","Type":"ContainerDied","Data":"bdb3c12ec6143ceda9b880281d7c87f98796f2ff0a73ffdc4fa2674590e0fdc1"} Feb 04 08:58:21 crc kubenswrapper[4644]: I0204 08:58:21.690133 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wt9t7" Feb 04 08:58:21 crc kubenswrapper[4644]: I0204 08:58:21.826583 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4cqd\" (UniqueName: \"kubernetes.io/projected/9c2f0296-f7ca-4c5d-bbf0-d77692ee814b-kube-api-access-q4cqd\") pod \"9c2f0296-f7ca-4c5d-bbf0-d77692ee814b\" (UID: \"9c2f0296-f7ca-4c5d-bbf0-d77692ee814b\") " Feb 04 08:58:21 crc kubenswrapper[4644]: I0204 08:58:21.826753 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2f0296-f7ca-4c5d-bbf0-d77692ee814b-operator-scripts\") pod \"9c2f0296-f7ca-4c5d-bbf0-d77692ee814b\" (UID: \"9c2f0296-f7ca-4c5d-bbf0-d77692ee814b\") " Feb 04 08:58:21 crc kubenswrapper[4644]: I0204 08:58:21.827304 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c2f0296-f7ca-4c5d-bbf0-d77692ee814b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c2f0296-f7ca-4c5d-bbf0-d77692ee814b" (UID: "9c2f0296-f7ca-4c5d-bbf0-d77692ee814b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:58:21 crc kubenswrapper[4644]: I0204 08:58:21.837071 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c2f0296-f7ca-4c5d-bbf0-d77692ee814b-kube-api-access-q4cqd" (OuterVolumeSpecName: "kube-api-access-q4cqd") pod "9c2f0296-f7ca-4c5d-bbf0-d77692ee814b" (UID: "9c2f0296-f7ca-4c5d-bbf0-d77692ee814b"). InnerVolumeSpecName "kube-api-access-q4cqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:58:21 crc kubenswrapper[4644]: I0204 08:58:21.930653 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4cqd\" (UniqueName: \"kubernetes.io/projected/9c2f0296-f7ca-4c5d-bbf0-d77692ee814b-kube-api-access-q4cqd\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:21 crc kubenswrapper[4644]: I0204 08:58:21.931000 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2f0296-f7ca-4c5d-bbf0-d77692ee814b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:22 crc kubenswrapper[4644]: I0204 08:58:22.102551 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vrbhk" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerName="registry-server" probeResult="failure" output=< Feb 04 08:58:22 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 08:58:22 crc kubenswrapper[4644]: > Feb 04 08:58:22 crc kubenswrapper[4644]: I0204 08:58:22.334823 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wt9t7" event={"ID":"9c2f0296-f7ca-4c5d-bbf0-d77692ee814b","Type":"ContainerDied","Data":"f557aeb430f8c205a38e50277c25b171cd1bda8ccffc4d644d9e207b72ded857"} Feb 04 08:58:22 crc kubenswrapper[4644]: I0204 08:58:22.334869 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f557aeb430f8c205a38e50277c25b171cd1bda8ccffc4d644d9e207b72ded857" Feb 04 08:58:22 crc kubenswrapper[4644]: I0204 08:58:22.334955 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wt9t7" Feb 04 08:58:25 crc kubenswrapper[4644]: I0204 08:58:25.842867 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=< Feb 04 08:58:25 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 08:58:25 crc kubenswrapper[4644]: > Feb 04 08:58:26 crc kubenswrapper[4644]: I0204 08:58:26.377011 4644 generic.go:334] "Generic (PLEG): container finished" podID="5a843b53-7ea4-48d9-9c8a-16be734d66c6" containerID="d1953f7ef02af72fdd95a4d7305fab9fa31dd6e6bd4d7d62a1a636418068f077" exitCode=0 Feb 04 08:58:26 crc kubenswrapper[4644]: I0204 08:58:26.377093 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sltjx" event={"ID":"5a843b53-7ea4-48d9-9c8a-16be734d66c6","Type":"ContainerDied","Data":"d1953f7ef02af72fdd95a4d7305fab9fa31dd6e6bd4d7d62a1a636418068f077"} Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.129369 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j6zpn"] Feb 04 08:58:28 crc kubenswrapper[4644]: E0204 08:58:28.129973 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86fcf6d5-331b-4a6e-b65c-3c68d28feb65" containerName="mariadb-account-create-update" Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.129986 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="86fcf6d5-331b-4a6e-b65c-3c68d28feb65" containerName="mariadb-account-create-update" Feb 04 08:58:28 crc kubenswrapper[4644]: E0204 08:58:28.130003 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675884a4-966e-4ca9-b279-e37202cab1d7" containerName="mariadb-account-create-update" 
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.130008 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="675884a4-966e-4ca9-b279-e37202cab1d7" containerName="mariadb-account-create-update"
Feb 04 08:58:28 crc kubenswrapper[4644]: E0204 08:58:28.130029 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2f0296-f7ca-4c5d-bbf0-d77692ee814b" containerName="mariadb-account-create-update"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.130035 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2f0296-f7ca-4c5d-bbf0-d77692ee814b" containerName="mariadb-account-create-update"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.130196 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c2f0296-f7ca-4c5d-bbf0-d77692ee814b" containerName="mariadb-account-create-update"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.130210 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="86fcf6d5-331b-4a6e-b65c-3c68d28feb65" containerName="mariadb-account-create-update"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.130223 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="675884a4-966e-4ca9-b279-e37202cab1d7" containerName="mariadb-account-create-update"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.131500 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j6zpn"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.142900 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j6zpn"]
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.219406 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/284c969e-fe7d-4d02-9f64-440e19f9f2ff-utilities\") pod \"certified-operators-j6zpn\" (UID: \"284c969e-fe7d-4d02-9f64-440e19f9f2ff\") " pod="openshift-marketplace/certified-operators-j6zpn"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.219459 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/284c969e-fe7d-4d02-9f64-440e19f9f2ff-catalog-content\") pod \"certified-operators-j6zpn\" (UID: \"284c969e-fe7d-4d02-9f64-440e19f9f2ff\") " pod="openshift-marketplace/certified-operators-j6zpn"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.219522 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv2f8\" (UniqueName: \"kubernetes.io/projected/284c969e-fe7d-4d02-9f64-440e19f9f2ff-kube-api-access-lv2f8\") pod \"certified-operators-j6zpn\" (UID: \"284c969e-fe7d-4d02-9f64-440e19f9f2ff\") " pod="openshift-marketplace/certified-operators-j6zpn"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.321112 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/284c969e-fe7d-4d02-9f64-440e19f9f2ff-utilities\") pod \"certified-operators-j6zpn\" (UID: \"284c969e-fe7d-4d02-9f64-440e19f9f2ff\") " pod="openshift-marketplace/certified-operators-j6zpn"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.321163 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/284c969e-fe7d-4d02-9f64-440e19f9f2ff-catalog-content\") pod \"certified-operators-j6zpn\" (UID: \"284c969e-fe7d-4d02-9f64-440e19f9f2ff\") " pod="openshift-marketplace/certified-operators-j6zpn"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.321223 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv2f8\" (UniqueName: \"kubernetes.io/projected/284c969e-fe7d-4d02-9f64-440e19f9f2ff-kube-api-access-lv2f8\") pod \"certified-operators-j6zpn\" (UID: \"284c969e-fe7d-4d02-9f64-440e19f9f2ff\") " pod="openshift-marketplace/certified-operators-j6zpn"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.322199 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/284c969e-fe7d-4d02-9f64-440e19f9f2ff-catalog-content\") pod \"certified-operators-j6zpn\" (UID: \"284c969e-fe7d-4d02-9f64-440e19f9f2ff\") " pod="openshift-marketplace/certified-operators-j6zpn"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.322460 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/284c969e-fe7d-4d02-9f64-440e19f9f2ff-utilities\") pod \"certified-operators-j6zpn\" (UID: \"284c969e-fe7d-4d02-9f64-440e19f9f2ff\") " pod="openshift-marketplace/certified-operators-j6zpn"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.349341 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv2f8\" (UniqueName: \"kubernetes.io/projected/284c969e-fe7d-4d02-9f64-440e19f9f2ff-kube-api-access-lv2f8\") pod \"certified-operators-j6zpn\" (UID: \"284c969e-fe7d-4d02-9f64-440e19f9f2ff\") " pod="openshift-marketplace/certified-operators-j6zpn"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.459692 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j6zpn"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.931810 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0"
Feb 04 08:58:28 crc kubenswrapper[4644]: I0204 08:58:28.946530 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1344aa43-93ef-4780-a56d-3eb89d55b1a2-etc-swift\") pod \"swift-storage-0\" (UID: \"1344aa43-93ef-4780-a56d-3eb89d55b1a2\") " pod="openstack/swift-storage-0"
Feb 04 08:58:29 crc kubenswrapper[4644]: I0204 08:58:29.136531 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 04 08:58:32 crc kubenswrapper[4644]: I0204 08:58:32.041393 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vrbhk" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerName="registry-server" probeResult="failure" output=<
Feb 04 08:58:32 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s
Feb 04 08:58:32 crc kubenswrapper[4644]: >
Feb 04 08:58:33 crc kubenswrapper[4644]: I0204 08:58:33.426270 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jl467"]
Feb 04 08:58:33 crc kubenswrapper[4644]: I0204 08:58:33.428206 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:33 crc kubenswrapper[4644]: I0204 08:58:33.454348 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jl467"]
Feb 04 08:58:33 crc kubenswrapper[4644]: I0204 08:58:33.511240 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7898bc13-6a9d-414d-9552-86ef0d51ac5e-utilities\") pod \"redhat-marketplace-jl467\" (UID: \"7898bc13-6a9d-414d-9552-86ef0d51ac5e\") " pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:33 crc kubenswrapper[4644]: I0204 08:58:33.511301 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snjjz\" (UniqueName: \"kubernetes.io/projected/7898bc13-6a9d-414d-9552-86ef0d51ac5e-kube-api-access-snjjz\") pod \"redhat-marketplace-jl467\" (UID: \"7898bc13-6a9d-414d-9552-86ef0d51ac5e\") " pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:33 crc kubenswrapper[4644]: I0204 08:58:33.511350 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7898bc13-6a9d-414d-9552-86ef0d51ac5e-catalog-content\") pod \"redhat-marketplace-jl467\" (UID: \"7898bc13-6a9d-414d-9552-86ef0d51ac5e\") " pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:33 crc kubenswrapper[4644]: I0204 08:58:33.612595 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7898bc13-6a9d-414d-9552-86ef0d51ac5e-utilities\") pod \"redhat-marketplace-jl467\" (UID: \"7898bc13-6a9d-414d-9552-86ef0d51ac5e\") " pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:33 crc kubenswrapper[4644]: I0204 08:58:33.612632 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snjjz\" (UniqueName: \"kubernetes.io/projected/7898bc13-6a9d-414d-9552-86ef0d51ac5e-kube-api-access-snjjz\") pod \"redhat-marketplace-jl467\" (UID: \"7898bc13-6a9d-414d-9552-86ef0d51ac5e\") " pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:33 crc kubenswrapper[4644]: I0204 08:58:33.612651 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7898bc13-6a9d-414d-9552-86ef0d51ac5e-catalog-content\") pod \"redhat-marketplace-jl467\" (UID: \"7898bc13-6a9d-414d-9552-86ef0d51ac5e\") " pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:33 crc kubenswrapper[4644]: I0204 08:58:33.613074 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7898bc13-6a9d-414d-9552-86ef0d51ac5e-catalog-content\") pod \"redhat-marketplace-jl467\" (UID: \"7898bc13-6a9d-414d-9552-86ef0d51ac5e\") " pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:33 crc kubenswrapper[4644]: I0204 08:58:33.613291 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7898bc13-6a9d-414d-9552-86ef0d51ac5e-utilities\") pod \"redhat-marketplace-jl467\" (UID: \"7898bc13-6a9d-414d-9552-86ef0d51ac5e\") " pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:33 crc kubenswrapper[4644]: I0204 08:58:33.633409 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snjjz\" (UniqueName: \"kubernetes.io/projected/7898bc13-6a9d-414d-9552-86ef0d51ac5e-kube-api-access-snjjz\") pod \"redhat-marketplace-jl467\" (UID: \"7898bc13-6a9d-414d-9552-86ef0d51ac5e\") " pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:33 crc kubenswrapper[4644]: I0204 08:58:33.765194 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.479359 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sltjx" event={"ID":"5a843b53-7ea4-48d9-9c8a-16be734d66c6","Type":"ContainerDied","Data":"a69103c60c78256c1bff4d0b57970ca85bd18a17f3c33a096c7393c4a8d11774"}
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.479732 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a69103c60c78256c1bff4d0b57970ca85bd18a17f3c33a096c7393c4a8d11774"
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.514577 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sltjx"
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.559523 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-swiftconf\") pod \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") "
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.559570 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-combined-ca-bundle\") pod \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") "
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.559599 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttn47\" (UniqueName: \"kubernetes.io/projected/5a843b53-7ea4-48d9-9c8a-16be734d66c6-kube-api-access-ttn47\") pod \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") "
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.559670 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-dispersionconf\") pod \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") "
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.559760 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a843b53-7ea4-48d9-9c8a-16be734d66c6-etc-swift\") pod \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") "
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.559851 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a843b53-7ea4-48d9-9c8a-16be734d66c6-ring-data-devices\") pod \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") "
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.559878 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a843b53-7ea4-48d9-9c8a-16be734d66c6-scripts\") pod \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\" (UID: \"5a843b53-7ea4-48d9-9c8a-16be734d66c6\") "
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.597115 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a843b53-7ea4-48d9-9c8a-16be734d66c6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5a843b53-7ea4-48d9-9c8a-16be734d66c6" (UID: "5a843b53-7ea4-48d9-9c8a-16be734d66c6"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.596618 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a843b53-7ea4-48d9-9c8a-16be734d66c6-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5a843b53-7ea4-48d9-9c8a-16be734d66c6" (UID: "5a843b53-7ea4-48d9-9c8a-16be734d66c6"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.598480 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a843b53-7ea4-48d9-9c8a-16be734d66c6-scripts" (OuterVolumeSpecName: "scripts") pod "5a843b53-7ea4-48d9-9c8a-16be734d66c6" (UID: "5a843b53-7ea4-48d9-9c8a-16be734d66c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.604911 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a843b53-7ea4-48d9-9c8a-16be734d66c6-kube-api-access-ttn47" (OuterVolumeSpecName: "kube-api-access-ttn47") pod "5a843b53-7ea4-48d9-9c8a-16be734d66c6" (UID: "5a843b53-7ea4-48d9-9c8a-16be734d66c6"). InnerVolumeSpecName "kube-api-access-ttn47". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.607371 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5a843b53-7ea4-48d9-9c8a-16be734d66c6" (UID: "5a843b53-7ea4-48d9-9c8a-16be734d66c6"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.607806 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a843b53-7ea4-48d9-9c8a-16be734d66c6" (UID: "5a843b53-7ea4-48d9-9c8a-16be734d66c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.616437 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5a843b53-7ea4-48d9-9c8a-16be734d66c6" (UID: "5a843b53-7ea4-48d9-9c8a-16be734d66c6"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.661803 4644 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.661846 4644 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a843b53-7ea4-48d9-9c8a-16be734d66c6-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.661858 4644 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a843b53-7ea4-48d9-9c8a-16be734d66c6-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.661870 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a843b53-7ea4-48d9-9c8a-16be734d66c6-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.661882 4644 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.661892 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a843b53-7ea4-48d9-9c8a-16be734d66c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.661903 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttn47\" (UniqueName: \"kubernetes.io/projected/5a843b53-7ea4-48d9-9c8a-16be734d66c6-kube-api-access-ttn47\") on node \"crc\" DevicePath \"\""
Feb 04 08:58:35 crc kubenswrapper[4644]: I0204 08:58:35.832236 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=<
Feb 04 08:58:35 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s
Feb 04 08:58:35 crc kubenswrapper[4644]: >
Feb 04 08:58:36 crc kubenswrapper[4644]: I0204 08:58:36.488378 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sltjx"
Feb 04 08:58:42 crc kubenswrapper[4644]: I0204 08:58:42.107301 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vrbhk" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerName="registry-server" probeResult="failure" output=<
Feb 04 08:58:42 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s
Feb 04 08:58:42 crc kubenswrapper[4644]: >
Feb 04 08:58:42 crc kubenswrapper[4644]: I0204 08:58:42.754109 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jl467"]
Feb 04 08:58:43 crc kubenswrapper[4644]: I0204 08:58:43.612497 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jl467" event={"ID":"7898bc13-6a9d-414d-9552-86ef0d51ac5e","Type":"ContainerStarted","Data":"b74ec58df6bb6e819e265ae4232f50d2985c3dbaeede9fcde9ba60d29e6760c0"}
Feb 04 08:58:43 crc kubenswrapper[4644]: I0204 08:58:43.651572 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j6zpn"]
Feb 04 08:58:43 crc kubenswrapper[4644]: W0204 08:58:43.692451 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod284c969e_fe7d_4d02_9f64_440e19f9f2ff.slice/crio-885e6a2319c537d4d03afee48da82cec04fcc9a0d405d6b4da410e69c7371515 WatchSource:0}: Error finding container 885e6a2319c537d4d03afee48da82cec04fcc9a0d405d6b4da410e69c7371515: Status 404 returned error can't find the container with id 885e6a2319c537d4d03afee48da82cec04fcc9a0d405d6b4da410e69c7371515
Feb 04 08:58:43 crc kubenswrapper[4644]: I0204 08:58:43.878445 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 04 08:58:43 crc kubenswrapper[4644]: W0204 08:58:43.890420 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1344aa43_93ef_4780_a56d_3eb89d55b1a2.slice/crio-197037d1e7e0b0804c25dd2f9f76fc80852e738a62ea3940ad6b935cdabbc046 WatchSource:0}: Error finding container 197037d1e7e0b0804c25dd2f9f76fc80852e738a62ea3940ad6b935cdabbc046: Status 404 returned error can't find the container with id 197037d1e7e0b0804c25dd2f9f76fc80852e738a62ea3940ad6b935cdabbc046
Feb 04 08:58:44 crc kubenswrapper[4644]: I0204 08:58:44.623265 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6zpn" event={"ID":"284c969e-fe7d-4d02-9f64-440e19f9f2ff","Type":"ContainerStarted","Data":"885e6a2319c537d4d03afee48da82cec04fcc9a0d405d6b4da410e69c7371515"}
Feb 04 08:58:44 crc kubenswrapper[4644]: I0204 08:58:44.624888 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"197037d1e7e0b0804c25dd2f9f76fc80852e738a62ea3940ad6b935cdabbc046"}
Feb 04 08:58:45 crc kubenswrapper[4644]: I0204 08:58:45.636486 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6dczk" event={"ID":"f1573f43-1a60-4b32-8286-02fb06f9d3a8","Type":"ContainerStarted","Data":"fd7100471e7661330f3d0a9dbe264cf5cfea675d295fb99705c0588b8176eb5f"}
Feb 04 08:58:45 crc kubenswrapper[4644]: I0204 08:58:45.639938 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f4mlq" event={"ID":"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31","Type":"ContainerStarted","Data":"8f930a01575e8ee275b492efdf0edac94a3b843708612bc102aa5552ad07850e"}
Feb 04 08:58:45 crc kubenswrapper[4644]: I0204 08:58:45.641625 4644 generic.go:334] "Generic (PLEG): container finished" podID="284c969e-fe7d-4d02-9f64-440e19f9f2ff" containerID="0dccb4f5f72be1ac97007e2f79de037c1950179271f60ee0a235406084ea6f55" exitCode=0
Feb 04 08:58:45 crc kubenswrapper[4644]: I0204 08:58:45.641667 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6zpn" event={"ID":"284c969e-fe7d-4d02-9f64-440e19f9f2ff","Type":"ContainerDied","Data":"0dccb4f5f72be1ac97007e2f79de037c1950179271f60ee0a235406084ea6f55"}
Feb 04 08:58:45 crc kubenswrapper[4644]: I0204 08:58:45.651439 4644 generic.go:334] "Generic (PLEG): container finished" podID="7898bc13-6a9d-414d-9552-86ef0d51ac5e" containerID="a35a9b227b1e88b73b31ab00247872497b953c80f34eb19f935a3814ffa32a7a" exitCode=0
Feb 04 08:58:45 crc kubenswrapper[4644]: I0204 08:58:45.651472 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jl467" event={"ID":"7898bc13-6a9d-414d-9552-86ef0d51ac5e","Type":"ContainerDied","Data":"a35a9b227b1e88b73b31ab00247872497b953c80f34eb19f935a3814ffa32a7a"}
Feb 04 08:58:45 crc kubenswrapper[4644]: I0204 08:58:45.687861 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6dczk" podStartSLOduration=3.7619347469999997 podStartE2EDuration="31.687841877s" podCreationTimestamp="2026-02-04 08:58:14 +0000 UTC" firstStartedPulling="2026-02-04 08:58:15.422121038 +0000 UTC m=+1005.462178793" lastFinishedPulling="2026-02-04 08:58:43.348028168 +0000 UTC m=+1033.388085923" observedRunningTime="2026-02-04 08:58:45.659648945 +0000 UTC m=+1035.699706700" watchObservedRunningTime="2026-02-04 08:58:45.687841877 +0000 UTC m=+1035.727899632"
Feb 04 08:58:45 crc kubenswrapper[4644]: I0204 08:58:45.713742 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-f4mlq" podStartSLOduration=4.280379531 podStartE2EDuration="27.713721875s" podCreationTimestamp="2026-02-04 08:58:18 +0000 UTC" firstStartedPulling="2026-02-04 08:58:19.906616753 +0000 UTC m=+1009.946674508" lastFinishedPulling="2026-02-04 08:58:43.339959097 +0000 UTC m=+1033.380016852" observedRunningTime="2026-02-04 08:58:45.707299769 +0000 UTC m=+1035.747357524" watchObservedRunningTime="2026-02-04 08:58:45.713721875 +0000 UTC m=+1035.753779630"
Feb 04 08:58:45 crc kubenswrapper[4644]: I0204 08:58:45.852370 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=<
Feb 04 08:58:45 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s
Feb 04 08:58:45 crc kubenswrapper[4644]: >
Feb 04 08:58:47 crc kubenswrapper[4644]: I0204 08:58:47.677752 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"9fd7a3d8ce373c838ea25dd10814b7bfb35ba1bb42d5840c8bc90458252eedc2"}
Feb 04 08:58:47 crc kubenswrapper[4644]: I0204 08:58:47.678185 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"1cb76c240ecd2c483a49caa07eb98e2a2971a184a12d4b46944bfd5f528dca9f"}
Feb 04 08:58:47 crc kubenswrapper[4644]: I0204 08:58:47.681434 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6zpn" event={"ID":"284c969e-fe7d-4d02-9f64-440e19f9f2ff","Type":"ContainerStarted","Data":"da4a4d3877aae1ee2a1ea3ab974d0e4b22895c4a62a732aec5a20b9018bc1969"}
Feb 04 08:58:47 crc kubenswrapper[4644]: I0204 08:58:47.695103 4644 generic.go:334] "Generic (PLEG): container finished" podID="7898bc13-6a9d-414d-9552-86ef0d51ac5e" containerID="4eafdfd11c55cfdb664f6b4933448fdfef28d7f3febff61e570b53fb7e362f0a" exitCode=0
Feb 04 08:58:47 crc kubenswrapper[4644]: I0204 08:58:47.695141 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jl467" event={"ID":"7898bc13-6a9d-414d-9552-86ef0d51ac5e","Type":"ContainerDied","Data":"4eafdfd11c55cfdb664f6b4933448fdfef28d7f3febff61e570b53fb7e362f0a"}
Feb 04 08:58:48 crc kubenswrapper[4644]: I0204 08:58:48.709086 4644 generic.go:334] "Generic (PLEG): container finished" podID="284c969e-fe7d-4d02-9f64-440e19f9f2ff" containerID="da4a4d3877aae1ee2a1ea3ab974d0e4b22895c4a62a732aec5a20b9018bc1969" exitCode=0
Feb 04 08:58:48 crc kubenswrapper[4644]: I0204 08:58:48.709238 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6zpn" event={"ID":"284c969e-fe7d-4d02-9f64-440e19f9f2ff","Type":"ContainerDied","Data":"da4a4d3877aae1ee2a1ea3ab974d0e4b22895c4a62a732aec5a20b9018bc1969"}
Feb 04 08:58:48 crc kubenswrapper[4644]: I0204 08:58:48.717394 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jl467" event={"ID":"7898bc13-6a9d-414d-9552-86ef0d51ac5e","Type":"ContainerStarted","Data":"b129ca6a687b391322f96e9a73f0dfdc557c1c65fb3d694d0242b2bd0dddc4d0"}
Feb 04 08:58:48 crc kubenswrapper[4644]: I0204 08:58:48.720962 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"eaa2a9c9fbbc7f8777d04315a01b21e12d695b1b3f01474465bdb931ee0b4d60"}
Feb 04 08:58:48 crc kubenswrapper[4644]: I0204 08:58:48.720993 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"145af54c10b3bb71a3dc774e6150a003f08367964ceb36e518464c3fa3f8d7a4"}
Feb 04 08:58:48 crc kubenswrapper[4644]: I0204 08:58:48.753908 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jl467" podStartSLOduration=13.161305558 podStartE2EDuration="15.753887806s" podCreationTimestamp="2026-02-04 08:58:33 +0000 UTC" firstStartedPulling="2026-02-04 08:58:45.661125385 +0000 UTC m=+1035.701183130" lastFinishedPulling="2026-02-04 08:58:48.253707613 +0000 UTC m=+1038.293765378" observedRunningTime="2026-02-04 08:58:48.751871702 +0000 UTC m=+1038.791929487" watchObservedRunningTime="2026-02-04 08:58:48.753887806 +0000 UTC m=+1038.793945561"
Feb 04 08:58:52 crc kubenswrapper[4644]: I0204 08:58:52.037097 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vrbhk" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerName="registry-server" probeResult="failure" output=<
Feb 04 08:58:52 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s
Feb 04 08:58:52 crc kubenswrapper[4644]: >
Feb 04 08:58:52 crc kubenswrapper[4644]: I0204 08:58:52.767771 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"ab495897868e1de79ffbaf428ab73c5e32cb59dd4c6144feba7ef00d1ee58f9e"}
Feb 04 08:58:52 crc kubenswrapper[4644]: I0204 08:58:52.771842 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6zpn" event={"ID":"284c969e-fe7d-4d02-9f64-440e19f9f2ff","Type":"ContainerStarted","Data":"1a1081177eb5ad76c153723295ce70e05a82295725edd016a3045a8d9e7579d6"}
Feb 04 08:58:52 crc kubenswrapper[4644]: I0204 08:58:52.795202 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j6zpn" podStartSLOduration=18.974978726 podStartE2EDuration="24.795177716s" podCreationTimestamp="2026-02-04 08:58:28 +0000 UTC" firstStartedPulling="2026-02-04 08:58:45.645503598 +0000 UTC m=+1035.685561353" lastFinishedPulling="2026-02-04 08:58:51.465702578 +0000 UTC m=+1041.505760343" observedRunningTime="2026-02-04 08:58:52.787572238 +0000 UTC m=+1042.827630003" watchObservedRunningTime="2026-02-04 08:58:52.795177716 +0000 UTC m=+1042.835235471"
Feb 04 08:58:53 crc kubenswrapper[4644]: I0204 08:58:53.765665 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:53 crc kubenswrapper[4644]: I0204 08:58:53.766068 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:53 crc kubenswrapper[4644]: I0204 08:58:53.785601 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"4bdf977a93c434a8e78fafb5a231b4422b76d9a5caff2f4d20f2d99513e917c6"}
Feb 04 08:58:53 crc kubenswrapper[4644]: I0204 08:58:53.785642 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"ec1e80b097f8ccd321d2555f4a03c340de4b173b1735ffd3a221550ec63f65b8"}
Feb 04 08:58:53 crc kubenswrapper[4644]: I0204 08:58:53.814438 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:53 crc kubenswrapper[4644]: I0204 08:58:53.885257 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:54 crc kubenswrapper[4644]: I0204 08:58:54.057267 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jl467"]
Feb 04 08:58:55 crc kubenswrapper[4644]: I0204 08:58:55.803000 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jl467" podUID="7898bc13-6a9d-414d-9552-86ef0d51ac5e" containerName="registry-server" containerID="cri-o://b129ca6a687b391322f96e9a73f0dfdc557c1c65fb3d694d0242b2bd0dddc4d0" gracePeriod=2
Feb 04 08:58:55 crc kubenswrapper[4644]: I0204 08:58:55.841882 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=<
Feb 04 08:58:55 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s
Feb 04 08:58:55 crc kubenswrapper[4644]: >
Feb 04 08:58:56 crc kubenswrapper[4644]: I0204 08:58:56.820177 4644 generic.go:334] "Generic (PLEG): container finished" podID="7898bc13-6a9d-414d-9552-86ef0d51ac5e" containerID="b129ca6a687b391322f96e9a73f0dfdc557c1c65fb3d694d0242b2bd0dddc4d0" exitCode=0
Feb 04 08:58:56 crc kubenswrapper[4644]: I0204 08:58:56.820226 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jl467" event={"ID":"7898bc13-6a9d-414d-9552-86ef0d51ac5e","Type":"ContainerDied","Data":"b129ca6a687b391322f96e9a73f0dfdc557c1c65fb3d694d0242b2bd0dddc4d0"}
Feb 04 08:58:56 crc kubenswrapper[4644]: I0204 08:58:56.995263 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jl467"
Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.130892 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7898bc13-6a9d-414d-9552-86ef0d51ac5e-catalog-content\") pod \"7898bc13-6a9d-414d-9552-86ef0d51ac5e\" (UID: \"7898bc13-6a9d-414d-9552-86ef0d51ac5e\") "
Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.131021 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snjjz\" (UniqueName: \"kubernetes.io/projected/7898bc13-6a9d-414d-9552-86ef0d51ac5e-kube-api-access-snjjz\") pod \"7898bc13-6a9d-414d-9552-86ef0d51ac5e\" (UID: \"7898bc13-6a9d-414d-9552-86ef0d51ac5e\") "
Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.131090 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7898bc13-6a9d-414d-9552-86ef0d51ac5e-utilities\") pod \"7898bc13-6a9d-414d-9552-86ef0d51ac5e\" (UID: \"7898bc13-6a9d-414d-9552-86ef0d51ac5e\") "
Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.132380 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7898bc13-6a9d-414d-9552-86ef0d51ac5e-utilities" (OuterVolumeSpecName: "utilities") pod "7898bc13-6a9d-414d-9552-86ef0d51ac5e" (UID: "7898bc13-6a9d-414d-9552-86ef0d51ac5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.136626 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7898bc13-6a9d-414d-9552-86ef0d51ac5e-kube-api-access-snjjz" (OuterVolumeSpecName: "kube-api-access-snjjz") pod "7898bc13-6a9d-414d-9552-86ef0d51ac5e" (UID: "7898bc13-6a9d-414d-9552-86ef0d51ac5e"). InnerVolumeSpecName "kube-api-access-snjjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.161904 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7898bc13-6a9d-414d-9552-86ef0d51ac5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7898bc13-6a9d-414d-9552-86ef0d51ac5e" (UID: "7898bc13-6a9d-414d-9552-86ef0d51ac5e"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.232644 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snjjz\" (UniqueName: \"kubernetes.io/projected/7898bc13-6a9d-414d-9552-86ef0d51ac5e-kube-api-access-snjjz\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.232685 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7898bc13-6a9d-414d-9552-86ef0d51ac5e-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.232695 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7898bc13-6a9d-414d-9552-86ef0d51ac5e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.832449 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jl467" event={"ID":"7898bc13-6a9d-414d-9552-86ef0d51ac5e","Type":"ContainerDied","Data":"b74ec58df6bb6e819e265ae4232f50d2985c3dbaeede9fcde9ba60d29e6760c0"} Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.832765 4644 scope.go:117] "RemoveContainer" containerID="b129ca6a687b391322f96e9a73f0dfdc557c1c65fb3d694d0242b2bd0dddc4d0" Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.832449 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jl467" Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.836573 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"a11eb99eaf047d24a6758abf5c7a19305b738cec58a5020970da1df6c1658824"} Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.870087 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jl467"] Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.879703 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jl467"] Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.885960 4644 scope.go:117] "RemoveContainer" containerID="4eafdfd11c55cfdb664f6b4933448fdfef28d7f3febff61e570b53fb7e362f0a" Feb 04 08:58:57 crc kubenswrapper[4644]: I0204 08:58:57.921867 4644 scope.go:117] "RemoveContainer" containerID="a35a9b227b1e88b73b31ab00247872497b953c80f34eb19f935a3814ffa32a7a" Feb 04 08:58:58 crc kubenswrapper[4644]: I0204 08:58:58.460003 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j6zpn" Feb 04 08:58:58 crc kubenswrapper[4644]: I0204 08:58:58.460361 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j6zpn" Feb 04 08:58:58 crc kubenswrapper[4644]: I0204 08:58:58.509761 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j6zpn" Feb 04 08:58:58 crc kubenswrapper[4644]: I0204 08:58:58.671131 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7898bc13-6a9d-414d-9552-86ef0d51ac5e" path="/var/lib/kubelet/pods/7898bc13-6a9d-414d-9552-86ef0d51ac5e/volumes" Feb 04 08:58:58 crc kubenswrapper[4644]: I0204 08:58:58.855453 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"41b1f9cd5a317d90d88489d06350af54c591289715a69ad4698c12314c29c33a"} Feb 04 08:58:58 crc kubenswrapper[4644]: I0204 08:58:58.903520 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j6zpn" Feb 04 08:58:59 crc kubenswrapper[4644]: I0204 08:58:59.455198 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j6zpn"] Feb 04 08:58:59 crc kubenswrapper[4644]: I0204 08:58:59.867810 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"6d8d445606a52bc137b04cd312b7114d1c28547cd658cb3e142ed4e405483cb5"} Feb 04 08:58:59 crc kubenswrapper[4644]: I0204 08:58:59.868110 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"5f4d18fa153939f53de9c2daa4fee0e9ab883dc4171f606ab7501d750be483dd"} Feb 04 08:58:59 crc kubenswrapper[4644]: I0204 08:58:59.868120 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"01f9a4e73f6c504595129b41bf12a10ace28111aaca4097068bd479a28965474"} Feb 04 08:58:59 crc kubenswrapper[4644]: I0204 08:58:59.868128 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"4d1990904c0a499edc1ce769de94d76b2a7d86cfe05b2f176462024c5ca87482"} Feb 04 08:59:00 crc kubenswrapper[4644]: I0204 08:59:00.881628 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"7e4b0ec53449ebd759ebf574fe14226b90b1d680b02dbd2d726c7d61c65c12c7"} Feb 04 08:59:00 crc kubenswrapper[4644]: I0204 08:59:00.882006 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1344aa43-93ef-4780-a56d-3eb89d55b1a2","Type":"ContainerStarted","Data":"78a772f6a0d69d96bd1cf7b6ba8fe08617e2a2fb56218852b30c1b76f08225e7"} Feb 04 08:59:00 crc kubenswrapper[4644]: I0204 08:59:00.881786 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j6zpn" podUID="284c969e-fe7d-4d02-9f64-440e19f9f2ff" containerName="registry-server" containerID="cri-o://1a1081177eb5ad76c153723295ce70e05a82295725edd016a3045a8d9e7579d6" gracePeriod=2 Feb 04 08:59:00 crc kubenswrapper[4644]: I0204 08:59:00.925774 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=51.361855179 podStartE2EDuration="1m5.9257569s" podCreationTimestamp="2026-02-04 08:57:55 +0000 UTC" firstStartedPulling="2026-02-04 08:58:43.894815358 +0000 UTC m=+1033.934873113" lastFinishedPulling="2026-02-04 08:58:58.458717079 +0000 UTC m=+1048.498774834" observedRunningTime="2026-02-04 08:59:00.923139568 +0000 UTC m=+1050.963197344" watchObservedRunningTime="2026-02-04 08:59:00.9257569 +0000 UTC m=+1050.965814655" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.048707 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 
08:59:01.097993 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.310098 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cg4bb"] Feb 04 08:59:01 crc kubenswrapper[4644]: E0204 08:59:01.310665 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7898bc13-6a9d-414d-9552-86ef0d51ac5e" containerName="extract-utilities" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.310781 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="7898bc13-6a9d-414d-9552-86ef0d51ac5e" containerName="extract-utilities" Feb 04 08:59:01 crc kubenswrapper[4644]: E0204 08:59:01.310885 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a843b53-7ea4-48d9-9c8a-16be734d66c6" containerName="swift-ring-rebalance" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.310966 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a843b53-7ea4-48d9-9c8a-16be734d66c6" containerName="swift-ring-rebalance" Feb 04 08:59:01 crc kubenswrapper[4644]: E0204 08:59:01.311061 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7898bc13-6a9d-414d-9552-86ef0d51ac5e" containerName="registry-server" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.311150 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="7898bc13-6a9d-414d-9552-86ef0d51ac5e" containerName="registry-server" Feb 04 08:59:01 crc kubenswrapper[4644]: E0204 08:59:01.311234 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7898bc13-6a9d-414d-9552-86ef0d51ac5e" containerName="extract-content" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.311310 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="7898bc13-6a9d-414d-9552-86ef0d51ac5e" containerName="extract-content" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.311876 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="7898bc13-6a9d-414d-9552-86ef0d51ac5e" containerName="registry-server" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.312303 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a843b53-7ea4-48d9-9c8a-16be734d66c6" containerName="swift-ring-rebalance" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.313540 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.327129 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.335173 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cg4bb"] Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.381749 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j6zpn" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.419802 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.419837 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.419906 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt2w7\" (UniqueName: \"kubernetes.io/projected/3918b954-dba6-4e50-988d-5202be75f235-kube-api-access-bt2w7\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.419927 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.419962 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-config\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.420027 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.520990 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/284c969e-fe7d-4d02-9f64-440e19f9f2ff-utilities\") pod \"284c969e-fe7d-4d02-9f64-440e19f9f2ff\" (UID: \"284c969e-fe7d-4d02-9f64-440e19f9f2ff\") " Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.521949 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/284c969e-fe7d-4d02-9f64-440e19f9f2ff-utilities" (OuterVolumeSpecName: "utilities") pod "284c969e-fe7d-4d02-9f64-440e19f9f2ff" (UID: "284c969e-fe7d-4d02-9f64-440e19f9f2ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.522208 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/284c969e-fe7d-4d02-9f64-440e19f9f2ff-catalog-content\") pod \"284c969e-fe7d-4d02-9f64-440e19f9f2ff\" (UID: \"284c969e-fe7d-4d02-9f64-440e19f9f2ff\") " Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.523618 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv2f8\" (UniqueName: \"kubernetes.io/projected/284c969e-fe7d-4d02-9f64-440e19f9f2ff-kube-api-access-lv2f8\") pod \"284c969e-fe7d-4d02-9f64-440e19f9f2ff\" (UID: \"284c969e-fe7d-4d02-9f64-440e19f9f2ff\") " Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.523939 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.524173 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.524204 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.524257 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt2w7\" (UniqueName: \"kubernetes.io/projected/3918b954-dba6-4e50-988d-5202be75f235-kube-api-access-bt2w7\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.524287 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.524397 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-config\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.524545 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/284c969e-fe7d-4d02-9f64-440e19f9f2ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.525681 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-config\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.527101 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.527235 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.527265 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.527272 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.535509 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284c969e-fe7d-4d02-9f64-440e19f9f2ff-kube-api-access-lv2f8" (OuterVolumeSpecName: "kube-api-access-lv2f8") pod "284c969e-fe7d-4d02-9f64-440e19f9f2ff" (UID: "284c969e-fe7d-4d02-9f64-440e19f9f2ff"). InnerVolumeSpecName "kube-api-access-lv2f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.553473 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt2w7\" (UniqueName: \"kubernetes.io/projected/3918b954-dba6-4e50-988d-5202be75f235-kube-api-access-bt2w7\") pod \"dnsmasq-dns-5c79d794d7-cg4bb\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.571898 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/284c969e-fe7d-4d02-9f64-440e19f9f2ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "284c969e-fe7d-4d02-9f64-440e19f9f2ff" (UID: "284c969e-fe7d-4d02-9f64-440e19f9f2ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.694877 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.886059 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/284c969e-fe7d-4d02-9f64-440e19f9f2ff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.886085 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv2f8\" (UniqueName: \"kubernetes.io/projected/284c969e-fe7d-4d02-9f64-440e19f9f2ff-kube-api-access-lv2f8\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.960349 4644 generic.go:334] "Generic (PLEG): container finished" podID="284c969e-fe7d-4d02-9f64-440e19f9f2ff" containerID="1a1081177eb5ad76c153723295ce70e05a82295725edd016a3045a8d9e7579d6" exitCode=0 Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.960527 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j6zpn" Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.960553 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6zpn" event={"ID":"284c969e-fe7d-4d02-9f64-440e19f9f2ff","Type":"ContainerDied","Data":"1a1081177eb5ad76c153723295ce70e05a82295725edd016a3045a8d9e7579d6"} Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.961650 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6zpn" event={"ID":"284c969e-fe7d-4d02-9f64-440e19f9f2ff","Type":"ContainerDied","Data":"885e6a2319c537d4d03afee48da82cec04fcc9a0d405d6b4da410e69c7371515"} Feb 04 08:59:01 crc kubenswrapper[4644]: I0204 08:59:01.961675 4644 scope.go:117] "RemoveContainer" containerID="1a1081177eb5ad76c153723295ce70e05a82295725edd016a3045a8d9e7579d6" Feb 04 08:59:02 crc kubenswrapper[4644]: I0204 08:59:02.024628 4644 scope.go:117] "RemoveContainer" containerID="da4a4d3877aae1ee2a1ea3ab974d0e4b22895c4a62a732aec5a20b9018bc1969" Feb 04 08:59:02 crc kubenswrapper[4644]: I0204 08:59:02.028734 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j6zpn"] Feb 04 08:59:02 crc kubenswrapper[4644]: I0204 08:59:02.042645 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j6zpn"] Feb 04 08:59:02 crc kubenswrapper[4644]: I0204 08:59:02.065057 4644 scope.go:117] "RemoveContainer" containerID="0dccb4f5f72be1ac97007e2f79de037c1950179271f60ee0a235406084ea6f55" Feb 04 08:59:02 crc kubenswrapper[4644]: I0204 08:59:02.095986 4644 scope.go:117] "RemoveContainer" containerID="1a1081177eb5ad76c153723295ce70e05a82295725edd016a3045a8d9e7579d6" Feb 04 08:59:02 crc kubenswrapper[4644]: E0204 08:59:02.097013 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1081177eb5ad76c153723295ce70e05a82295725edd016a3045a8d9e7579d6\": container with ID starting with 1a1081177eb5ad76c153723295ce70e05a82295725edd016a3045a8d9e7579d6 not found: ID does not exist" containerID="1a1081177eb5ad76c153723295ce70e05a82295725edd016a3045a8d9e7579d6" Feb 04 08:59:02 crc kubenswrapper[4644]: I0204 08:59:02.097049 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1081177eb5ad76c153723295ce70e05a82295725edd016a3045a8d9e7579d6"} err="failed to get container status 
\"1a1081177eb5ad76c153723295ce70e05a82295725edd016a3045a8d9e7579d6\": rpc error: code = NotFound desc = could not find container \"1a1081177eb5ad76c153723295ce70e05a82295725edd016a3045a8d9e7579d6\": container with ID starting with 1a1081177eb5ad76c153723295ce70e05a82295725edd016a3045a8d9e7579d6 not found: ID does not exist" Feb 04 08:59:02 crc kubenswrapper[4644]: I0204 08:59:02.097071 4644 scope.go:117] "RemoveContainer" containerID="da4a4d3877aae1ee2a1ea3ab974d0e4b22895c4a62a732aec5a20b9018bc1969" Feb 04 08:59:02 crc kubenswrapper[4644]: E0204 08:59:02.097361 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da4a4d3877aae1ee2a1ea3ab974d0e4b22895c4a62a732aec5a20b9018bc1969\": container with ID starting with da4a4d3877aae1ee2a1ea3ab974d0e4b22895c4a62a732aec5a20b9018bc1969 not found: ID does not exist" containerID="da4a4d3877aae1ee2a1ea3ab974d0e4b22895c4a62a732aec5a20b9018bc1969" Feb 04 08:59:02 crc kubenswrapper[4644]: I0204 08:59:02.097390 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da4a4d3877aae1ee2a1ea3ab974d0e4b22895c4a62a732aec5a20b9018bc1969"} err="failed to get container status \"da4a4d3877aae1ee2a1ea3ab974d0e4b22895c4a62a732aec5a20b9018bc1969\": rpc error: code = NotFound desc = could not find container \"da4a4d3877aae1ee2a1ea3ab974d0e4b22895c4a62a732aec5a20b9018bc1969\": container with ID starting with da4a4d3877aae1ee2a1ea3ab974d0e4b22895c4a62a732aec5a20b9018bc1969 not found: ID does not exist" Feb 04 08:59:02 crc kubenswrapper[4644]: I0204 08:59:02.097409 4644 scope.go:117] "RemoveContainer" containerID="0dccb4f5f72be1ac97007e2f79de037c1950179271f60ee0a235406084ea6f55" Feb 04 08:59:02 crc kubenswrapper[4644]: E0204 08:59:02.098263 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dccb4f5f72be1ac97007e2f79de037c1950179271f60ee0a235406084ea6f55\": container with ID starting with 0dccb4f5f72be1ac97007e2f79de037c1950179271f60ee0a235406084ea6f55 not found: ID does not exist" containerID="0dccb4f5f72be1ac97007e2f79de037c1950179271f60ee0a235406084ea6f55" Feb 04 08:59:02 crc kubenswrapper[4644]: I0204 08:59:02.098290 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dccb4f5f72be1ac97007e2f79de037c1950179271f60ee0a235406084ea6f55"} err="failed to get container status \"0dccb4f5f72be1ac97007e2f79de037c1950179271f60ee0a235406084ea6f55\": rpc error: code = NotFound desc = could not find container \"0dccb4f5f72be1ac97007e2f79de037c1950179271f60ee0a235406084ea6f55\": container with ID starting with 0dccb4f5f72be1ac97007e2f79de037c1950179271f60ee0a235406084ea6f55 not found: ID does not exist" Feb 04 08:59:02 crc kubenswrapper[4644]: I0204 08:59:02.399038 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cg4bb"] Feb 04 08:59:02 crc kubenswrapper[4644]: W0204 08:59:02.402689 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3918b954_dba6_4e50_988d_5202be75f235.slice/crio-155050f1d6b4940f7e22ca4edc68dcf56fbea630cbfd140a615329705e2b5a68 WatchSource:0}: Error finding container 155050f1d6b4940f7e22ca4edc68dcf56fbea630cbfd140a615329705e2b5a68: Status 404 returned error can't find the container with id 155050f1d6b4940f7e22ca4edc68dcf56fbea630cbfd140a615329705e2b5a68 Feb 04 08:59:02 crc kubenswrapper[4644]: I0204 
08:59:02.676815 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="284c969e-fe7d-4d02-9f64-440e19f9f2ff" path="/var/lib/kubelet/pods/284c969e-fe7d-4d02-9f64-440e19f9f2ff/volumes" Feb 04 08:59:02 crc kubenswrapper[4644]: I0204 08:59:02.971145 4644 generic.go:334] "Generic (PLEG): container finished" podID="3918b954-dba6-4e50-988d-5202be75f235" containerID="9eb924c3b11bc9b1ab0d1dc58f3dfaa5819443de38db181151f824380582d4b3" exitCode=0 Feb 04 08:59:02 crc kubenswrapper[4644]: I0204 08:59:02.971267 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" event={"ID":"3918b954-dba6-4e50-988d-5202be75f235","Type":"ContainerDied","Data":"9eb924c3b11bc9b1ab0d1dc58f3dfaa5819443de38db181151f824380582d4b3"} Feb 04 08:59:02 crc kubenswrapper[4644]: I0204 08:59:02.971513 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" event={"ID":"3918b954-dba6-4e50-988d-5202be75f235","Type":"ContainerStarted","Data":"155050f1d6b4940f7e22ca4edc68dcf56fbea630cbfd140a615329705e2b5a68"} Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.246264 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vrbhk"] Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.246800 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vrbhk" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerName="registry-server" containerID="cri-o://4801b5e16c2da076431365c30986a247fefed3d8248b8621dc295f73b0274c60" gracePeriod=2 Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.708196 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.822603 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-catalog-content\") pod \"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4\" (UID: \"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4\") " Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.822802 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-utilities\") pod \"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4\" (UID: \"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4\") " Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.822949 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kws5s\" (UniqueName: \"kubernetes.io/projected/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-kube-api-access-kws5s\") pod \"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4\" (UID: \"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4\") " Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.823752 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-utilities" (OuterVolumeSpecName: "utilities") pod "c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" (UID: "c8c12fd3-0cf3-47c2-9b41-edf74e7645d4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.830222 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-kube-api-access-kws5s" (OuterVolumeSpecName: "kube-api-access-kws5s") pod "c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" (UID: "c8c12fd3-0cf3-47c2-9b41-edf74e7645d4"). InnerVolumeSpecName "kube-api-access-kws5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.881183 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" (UID: "c8c12fd3-0cf3-47c2-9b41-edf74e7645d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.924840 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kws5s\" (UniqueName: \"kubernetes.io/projected/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-kube-api-access-kws5s\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.924871 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.924880 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.980685 4644 generic.go:334] "Generic (PLEG): container finished" podID="b6e10c7c-ed36-46c7-8f45-1428c6a7fa31" containerID="8f930a01575e8ee275b492efdf0edac94a3b843708612bc102aa5552ad07850e" exitCode=0 Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.980768 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f4mlq" event={"ID":"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31","Type":"ContainerDied","Data":"8f930a01575e8ee275b492efdf0edac94a3b843708612bc102aa5552ad07850e"} Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.985362 4644 generic.go:334] "Generic (PLEG): container finished" podID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerID="4801b5e16c2da076431365c30986a247fefed3d8248b8621dc295f73b0274c60" exitCode=0 Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.985443 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrbhk" event={"ID":"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4","Type":"ContainerDied","Data":"4801b5e16c2da076431365c30986a247fefed3d8248b8621dc295f73b0274c60"} Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.985458 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vrbhk" Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.985472 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrbhk" event={"ID":"c8c12fd3-0cf3-47c2-9b41-edf74e7645d4","Type":"ContainerDied","Data":"ccdb3ed4a6448042f0e8f26dd935caef3175886a0f075fb40dbf69d1eb2d3019"} Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.985491 4644 scope.go:117] "RemoveContainer" containerID="4801b5e16c2da076431365c30986a247fefed3d8248b8621dc295f73b0274c60" Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.988803 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" event={"ID":"3918b954-dba6-4e50-988d-5202be75f235","Type":"ContainerStarted","Data":"c01f2d7ed8a0e586a737492f221441763349262902ac50c556d2e96683bb3f61"} Feb 04 08:59:03 crc kubenswrapper[4644]: I0204 08:59:03.989040 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:04 crc kubenswrapper[4644]: I0204 08:59:04.016989 4644 scope.go:117] "RemoveContainer" containerID="8a3c344b2aad4893de2980f6dd18367f287ee7da95b374402d22321f10794493" Feb 04 08:59:04 crc kubenswrapper[4644]: I0204 08:59:04.030826 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" podStartSLOduration=3.030805419 podStartE2EDuration="3.030805419s" podCreationTimestamp="2026-02-04 08:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:59:04.022095581 +0000 UTC m=+1054.062153356" watchObservedRunningTime="2026-02-04 08:59:04.030805419 +0000 UTC m=+1054.070863194" Feb 04 08:59:04 crc kubenswrapper[4644]: I0204 08:59:04.057482 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vrbhk"] Feb 04 08:59:04 crc kubenswrapper[4644]: I0204 08:59:04.062280 4644 scope.go:117] "RemoveContainer" containerID="41f61709ba34f8235ac74d240dd46572aa8f5b1dcc2a0d26546369bbd1e0629c" Feb 04 08:59:04 crc kubenswrapper[4644]: I0204 08:59:04.067561 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vrbhk"] Feb 04 08:59:04 crc kubenswrapper[4644]: I0204 08:59:04.086606 4644 scope.go:117] "RemoveContainer" containerID="4801b5e16c2da076431365c30986a247fefed3d8248b8621dc295f73b0274c60" Feb 04 08:59:04 crc kubenswrapper[4644]: E0204 08:59:04.087485 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4801b5e16c2da076431365c30986a247fefed3d8248b8621dc295f73b0274c60\": container with ID starting with 4801b5e16c2da076431365c30986a247fefed3d8248b8621dc295f73b0274c60 not found: ID does not exist" containerID="4801b5e16c2da076431365c30986a247fefed3d8248b8621dc295f73b0274c60" Feb 04 08:59:04 crc kubenswrapper[4644]: I0204 08:59:04.087521 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4801b5e16c2da076431365c30986a247fefed3d8248b8621dc295f73b0274c60"} err="failed to get container status \"4801b5e16c2da076431365c30986a247fefed3d8248b8621dc295f73b0274c60\": rpc error: code = NotFound desc = could not find container \"4801b5e16c2da076431365c30986a247fefed3d8248b8621dc295f73b0274c60\": container with ID starting with 4801b5e16c2da076431365c30986a247fefed3d8248b8621dc295f73b0274c60 
not found: ID does not exist" Feb 04 08:59:04 crc kubenswrapper[4644]: I0204 08:59:04.087560 4644 scope.go:117] "RemoveContainer" containerID="8a3c344b2aad4893de2980f6dd18367f287ee7da95b374402d22321f10794493" Feb 04 08:59:04 crc kubenswrapper[4644]: E0204 08:59:04.087835 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a3c344b2aad4893de2980f6dd18367f287ee7da95b374402d22321f10794493\": container with ID starting with 8a3c344b2aad4893de2980f6dd18367f287ee7da95b374402d22321f10794493 not found: ID does not exist" containerID="8a3c344b2aad4893de2980f6dd18367f287ee7da95b374402d22321f10794493" Feb 04 08:59:04 crc kubenswrapper[4644]: I0204 08:59:04.087881 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a3c344b2aad4893de2980f6dd18367f287ee7da95b374402d22321f10794493"} err="failed to get container status \"8a3c344b2aad4893de2980f6dd18367f287ee7da95b374402d22321f10794493\": rpc error: code = NotFound desc = could not find container \"8a3c344b2aad4893de2980f6dd18367f287ee7da95b374402d22321f10794493\": container with ID starting with 8a3c344b2aad4893de2980f6dd18367f287ee7da95b374402d22321f10794493 not found: ID does not exist" Feb 04 08:59:04 crc kubenswrapper[4644]: I0204 08:59:04.087919 4644 scope.go:117] "RemoveContainer" containerID="41f61709ba34f8235ac74d240dd46572aa8f5b1dcc2a0d26546369bbd1e0629c" Feb 04 08:59:04 crc kubenswrapper[4644]: E0204 08:59:04.088406 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f61709ba34f8235ac74d240dd46572aa8f5b1dcc2a0d26546369bbd1e0629c\": container with ID starting with 41f61709ba34f8235ac74d240dd46572aa8f5b1dcc2a0d26546369bbd1e0629c not found: ID does not exist" containerID="41f61709ba34f8235ac74d240dd46572aa8f5b1dcc2a0d26546369bbd1e0629c" Feb 04 08:59:04 crc kubenswrapper[4644]: I0204 08:59:04.088445 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f61709ba34f8235ac74d240dd46572aa8f5b1dcc2a0d26546369bbd1e0629c"} err="failed to get container status \"41f61709ba34f8235ac74d240dd46572aa8f5b1dcc2a0d26546369bbd1e0629c\": rpc error: code = NotFound desc = could not find container \"41f61709ba34f8235ac74d240dd46572aa8f5b1dcc2a0d26546369bbd1e0629c\": container with ID starting with 41f61709ba34f8235ac74d240dd46572aa8f5b1dcc2a0d26546369bbd1e0629c not found: ID does not exist" Feb 04 08:59:04 crc kubenswrapper[4644]: I0204 08:59:04.674730 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" path="/var/lib/kubelet/pods/c8c12fd3-0cf3-47c2-9b41-edf74e7645d4/volumes" Feb 04 08:59:05 crc kubenswrapper[4644]: I0204 08:59:05.356272 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-f4mlq" Feb 04 08:59:05 crc kubenswrapper[4644]: I0204 08:59:05.451181 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg9mr\" (UniqueName: \"kubernetes.io/projected/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-kube-api-access-vg9mr\") pod \"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31\" (UID: \"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31\") " Feb 04 08:59:05 crc kubenswrapper[4644]: I0204 08:59:05.451232 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-config-data\") pod \"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31\" (UID: \"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31\") " Feb 04 08:59:05 crc kubenswrapper[4644]: I0204 08:59:05.451279 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-combined-ca-bundle\") pod \"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31\" (UID: \"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31\") " Feb 04 08:59:05 crc kubenswrapper[4644]: I0204 08:59:05.457713 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-kube-api-access-vg9mr" (OuterVolumeSpecName: "kube-api-access-vg9mr") pod "b6e10c7c-ed36-46c7-8f45-1428c6a7fa31" (UID: "b6e10c7c-ed36-46c7-8f45-1428c6a7fa31"). InnerVolumeSpecName "kube-api-access-vg9mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:59:05 crc kubenswrapper[4644]: I0204 08:59:05.480049 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6e10c7c-ed36-46c7-8f45-1428c6a7fa31" (UID: "b6e10c7c-ed36-46c7-8f45-1428c6a7fa31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:59:05 crc kubenswrapper[4644]: I0204 08:59:05.507532 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-config-data" (OuterVolumeSpecName: "config-data") pod "b6e10c7c-ed36-46c7-8f45-1428c6a7fa31" (UID: "b6e10c7c-ed36-46c7-8f45-1428c6a7fa31"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:59:05 crc kubenswrapper[4644]: I0204 08:59:05.554136 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg9mr\" (UniqueName: \"kubernetes.io/projected/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-kube-api-access-vg9mr\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:05 crc kubenswrapper[4644]: I0204 08:59:05.554172 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:05 crc kubenswrapper[4644]: I0204 08:59:05.554182 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:05 crc kubenswrapper[4644]: I0204 08:59:05.856023 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=< Feb 04 08:59:05 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 08:59:05 crc kubenswrapper[4644]: > Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.009858 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f4mlq" event={"ID":"b6e10c7c-ed36-46c7-8f45-1428c6a7fa31","Type":"ContainerDied","Data":"86b2413aa2f9a5d51268966e135d31cacabfa253a39406c12b2959ca648dccd1"} Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.009900 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86b2413aa2f9a5d51268966e135d31cacabfa253a39406c12b2959ca648dccd1" Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.009909 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-f4mlq" Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.254245 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cg4bb"] Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.254754 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" podUID="3918b954-dba6-4e50-988d-5202be75f235" containerName="dnsmasq-dns" containerID="cri-o://c01f2d7ed8a0e586a737492f221441763349262902ac50c556d2e96683bb3f61" gracePeriod=10 Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.282917 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xkk6h"] Feb 04 08:59:06 crc kubenswrapper[4644]: E0204 08:59:06.283514 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284c969e-fe7d-4d02-9f64-440e19f9f2ff" containerName="extract-content" Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.283543 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="284c969e-fe7d-4d02-9f64-440e19f9f2ff" containerName="extract-content" Feb 04 08:59:06 crc kubenswrapper[4644]: E0204 08:59:06.283573 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerName="extract-content" Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.283580 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerName="extract-content" Feb 04 08:59:06 crc kubenswrapper[4644]: E0204 08:59:06.283592 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284c969e-fe7d-4d02-9f64-440e19f9f2ff" containerName="extract-utilities" Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.283599 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="284c969e-fe7d-4d02-9f64-440e19f9f2ff" containerName="extract-utilities" Feb 04 08:59:06 crc kubenswrapper[4644]: E0204 08:59:06.283609 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerName="extract-utilities" Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.283615 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerName="extract-utilities" Feb 04 08:59:06 crc kubenswrapper[4644]: E0204 08:59:06.283638 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerName="registry-server" Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.283645 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerName="registry-server" Feb 04 08:59:06 crc kubenswrapper[4644]: E0204 08:59:06.283660 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e10c7c-ed36-46c7-8f45-1428c6a7fa31" containerName="keystone-db-sync" Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.283666 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e10c7c-ed36-46c7-8f45-1428c6a7fa31" containerName="keystone-db-sync" Feb 04 08:59:06 crc kubenswrapper[4644]: E0204 08:59:06.283681 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284c969e-fe7d-4d02-9f64-440e19f9f2ff" containerName="registry-server" Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.283693 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="284c969e-fe7d-4d02-9f64-440e19f9f2ff" containerName="registry-server" Feb 04 08:59:06 crc 
kubenswrapper[4644]: I0204 08:59:06.283939 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8c12fd3-0cf3-47c2-9b41-edf74e7645d4" containerName="registry-server"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.283957 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e10c7c-ed36-46c7-8f45-1428c6a7fa31" containerName="keystone-db-sync"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.283968 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="284c969e-fe7d-4d02-9f64-440e19f9f2ff" containerName="registry-server"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.284927 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.291742 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.291941 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.292150 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4x467"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.292341 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.292587 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.329369 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-5l8f7"]
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.331086 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.369621 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-5l8f7"]
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.379587 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xkk6h"]
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.382159 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-scripts\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.382239 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-fernet-keys\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.382275 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-combined-ca-bundle\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.382300 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prfzn\" (UniqueName: \"kubernetes.io/projected/dbbacde7-db93-44ed-801c-be230c2f1594-kube-api-access-prfzn\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.382373 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-config-data\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.382393 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-credential-keys\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.484241 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-config-data\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.484290 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-credential-keys\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.484320 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-dns-svc\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.484358 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.484391 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm79p\" (UniqueName: \"kubernetes.io/projected/4cdd03c0-eab6-43c6-97b2-af44af5a7179-kube-api-access-xm79p\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.484413 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-scripts\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.484445 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.484463 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.484480 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-fernet-keys\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.484500 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-config\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.484533 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-combined-ca-bundle\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.484556 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prfzn\" (UniqueName: \"kubernetes.io/projected/dbbacde7-db93-44ed-801c-be230c2f1594-kube-api-access-prfzn\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.498015 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-fernet-keys\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.498867 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-credential-keys\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.499154 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-combined-ca-bundle\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.500296 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-config-data\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.519005 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-scripts\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.540954 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prfzn\" (UniqueName: \"kubernetes.io/projected/dbbacde7-db93-44ed-801c-be230c2f1594-kube-api-access-prfzn\") pod \"keystone-bootstrap-xkk6h\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.587943 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-dns-svc\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.588001 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.588055 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm79p\" (UniqueName: \"kubernetes.io/projected/4cdd03c0-eab6-43c6-97b2-af44af5a7179-kube-api-access-xm79p\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.588106 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.588127 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.588157 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-config\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.589194 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-config\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.589887 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.590108 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-dns-svc\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.590587 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.590737 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.616233 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-847d8d4b6f-jkqkx"]
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.617579 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.617655 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xkk6h"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.628973 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-wlwtb"]
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.630003 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.630635 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-fg2kc"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.630778 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.649002 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm79p\" (UniqueName: \"kubernetes.io/projected/4cdd03c0-eab6-43c6-97b2-af44af5a7179-kube-api-access-xm79p\") pod \"dnsmasq-dns-5b868669f-5l8f7\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.650108 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.650346 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.650591 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-mpwwp"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.650722 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.650899 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.664670 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-5l8f7"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.695364 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/630413ac-d24f-412b-a8da-ec0d7f64ed1e-scripts\") pod \"horizon-847d8d4b6f-jkqkx\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.695409 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/630413ac-d24f-412b-a8da-ec0d7f64ed1e-horizon-secret-key\") pod \"horizon-847d8d4b6f-jkqkx\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.695435 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9xvv\" (UniqueName: \"kubernetes.io/projected/630413ac-d24f-412b-a8da-ec0d7f64ed1e-kube-api-access-c9xvv\") pod \"horizon-847d8d4b6f-jkqkx\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.695452 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/630413ac-d24f-412b-a8da-ec0d7f64ed1e-config-data\") pod \"horizon-847d8d4b6f-jkqkx\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.695532 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/630413ac-d24f-412b-a8da-ec0d7f64ed1e-logs\") pod \"horizon-847d8d4b6f-jkqkx\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.702390 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wlwtb"]
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.726231 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.728245 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.750936 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.751351 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.762166 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-847d8d4b6f-jkqkx"]
Feb 04 08:59:06 crc kubenswrapper[4644]: I0204 08:59:06.782489 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9pl45"]
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.789536 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9pl45"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.797985 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/630413ac-d24f-412b-a8da-ec0d7f64ed1e-logs\") pod \"horizon-847d8d4b6f-jkqkx\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.798145 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/630413ac-d24f-412b-a8da-ec0d7f64ed1e-scripts\") pod \"horizon-847d8d4b6f-jkqkx\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.798178 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/630413ac-d24f-412b-a8da-ec0d7f64ed1e-horizon-secret-key\") pod \"horizon-847d8d4b6f-jkqkx\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.798213 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9xvv\" (UniqueName: \"kubernetes.io/projected/630413ac-d24f-412b-a8da-ec0d7f64ed1e-kube-api-access-c9xvv\") pod \"horizon-847d8d4b6f-jkqkx\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.798229 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/630413ac-d24f-412b-a8da-ec0d7f64ed1e-config-data\") pod \"horizon-847d8d4b6f-jkqkx\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.803893 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/630413ac-d24f-412b-a8da-ec0d7f64ed1e-scripts\") pod \"horizon-847d8d4b6f-jkqkx\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.801405 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/630413ac-d24f-412b-a8da-ec0d7f64ed1e-logs\") pod \"horizon-847d8d4b6f-jkqkx\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.818858 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/630413ac-d24f-412b-a8da-ec0d7f64ed1e-config-data\") pod \"horizon-847d8d4b6f-jkqkx\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.827112 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/630413ac-d24f-412b-a8da-ec0d7f64ed1e-horizon-secret-key\") pod \"horizon-847d8d4b6f-jkqkx\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.834119 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-l25sz"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.834504 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.836230 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.868698 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9xvv\" (UniqueName: \"kubernetes.io/projected/630413ac-d24f-412b-a8da-ec0d7f64ed1e-kube-api-access-c9xvv\") pod \"horizon-847d8d4b6f-jkqkx\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.886163 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.899205 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-config-data\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.899258 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-etc-machine-id\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.899275 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-db-sync-config-data\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.899306 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-scripts\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.899395 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-combined-ca-bundle\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.899418 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n62d\" (UniqueName: \"kubernetes.io/projected/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-kube-api-access-8n62d\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.929012 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9pl45"]
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.934431 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-nlr7w"]
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.935441 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.946938 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zpdz8"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.947147 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:06.947489 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001358 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-combined-ca-bundle\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001401 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-scripts\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001424 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001445 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n62d\" (UniqueName: \"kubernetes.io/projected/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-kube-api-access-8n62d\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001461 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bcc65018-86ae-4c36-bf21-3849c09ee648-config\") pod \"neutron-db-sync-9pl45\" (UID: \"bcc65018-86ae-4c36-bf21-3849c09ee648\") " pod="openstack/neutron-db-sync-9pl45"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001481 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e18a22-5f5b-4233-86cb-1c014ab61840-log-httpd\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001523 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-config-data\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001548 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9lw\" (UniqueName: \"kubernetes.io/projected/bcc65018-86ae-4c36-bf21-3849c09ee648-kube-api-access-zm9lw\") pod \"neutron-db-sync-9pl45\" (UID: \"bcc65018-86ae-4c36-bf21-3849c09ee648\") " pod="openstack/neutron-db-sync-9pl45"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001578 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcc65018-86ae-4c36-bf21-3849c09ee648-combined-ca-bundle\") pod \"neutron-db-sync-9pl45\" (UID: \"bcc65018-86ae-4c36-bf21-3849c09ee648\") " pod="openstack/neutron-db-sync-9pl45"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001599 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-config-data\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001621 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e18a22-5f5b-4233-86cb-1c014ab61840-run-httpd\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001639 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-etc-machine-id\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001653 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-db-sync-config-data\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001671 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25vcc\" (UniqueName: \"kubernetes.io/projected/39e18a22-5f5b-4233-86cb-1c014ab61840-kube-api-access-25vcc\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001690 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-scripts\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.001734 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.008627 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-etc-machine-id\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.022605 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-nlr7w"]
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.026570 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-847d8d4b6f-jkqkx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.028775 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-db-sync-config-data\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.039486 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-scripts\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.040259 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-config-data\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.043348 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-combined-ca-bundle\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.070618 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n62d\" (UniqueName: \"kubernetes.io/projected/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-kube-api-access-8n62d\") pod \"cinder-db-sync-wlwtb\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.083373 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wlwtb"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.088397 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-6tq9r"]
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.089615 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6tq9r"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.103260 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2jz5\" (UniqueName: \"kubernetes.io/projected/c6677efd-b2e4-45b7-8703-3a189d87723d-kube-api-access-m2jz5\") pod \"placement-db-sync-nlr7w\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.103321 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-config-data\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.103365 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9lw\" (UniqueName: \"kubernetes.io/projected/bcc65018-86ae-4c36-bf21-3849c09ee648-kube-api-access-zm9lw\") pod \"neutron-db-sync-9pl45\" (UID: \"bcc65018-86ae-4c36-bf21-3849c09ee648\") " pod="openstack/neutron-db-sync-9pl45"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.103404 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-combined-ca-bundle\") pod \"placement-db-sync-nlr7w\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.103428 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcc65018-86ae-4c36-bf21-3849c09ee648-combined-ca-bundle\") pod \"neutron-db-sync-9pl45\" (UID: \"bcc65018-86ae-4c36-bf21-3849c09ee648\") " pod="openstack/neutron-db-sync-9pl45"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.103474 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e18a22-5f5b-4233-86cb-1c014ab61840-run-httpd\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.103492 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-scripts\") pod \"placement-db-sync-nlr7w\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.103510 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25vcc\" (UniqueName: \"kubernetes.io/projected/39e18a22-5f5b-4233-86cb-1c014ab61840-kube-api-access-25vcc\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.103535 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6677efd-b2e4-45b7-8703-3a189d87723d-logs\") pod \"placement-db-sync-nlr7w\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.103560 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.103587 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-config-data\") pod \"placement-db-sync-nlr7w\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.103607 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-scripts\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.103623 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.103642 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bcc65018-86ae-4c36-bf21-3849c09ee648-config\") pod \"neutron-db-sync-9pl45\" (UID: \"bcc65018-86ae-4c36-bf21-3849c09ee648\") " pod="openstack/neutron-db-sync-9pl45"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.103666 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e18a22-5f5b-4233-86cb-1c014ab61840-log-httpd\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.107826 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hmrt5"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.107881 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.111740 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e18a22-5f5b-4233-86cb-1c014ab61840-log-httpd\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.124383 4644 generic.go:334] "Generic (PLEG): container finished" podID="3918b954-dba6-4e50-988d-5202be75f235" containerID="c01f2d7ed8a0e586a737492f221441763349262902ac50c556d2e96683bb3f61" exitCode=0
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.124430 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" event={"ID":"3918b954-dba6-4e50-988d-5202be75f235","Type":"ContainerDied","Data":"c01f2d7ed8a0e586a737492f221441763349262902ac50c556d2e96683bb3f61"}
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.156439 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-5l8f7"]
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.157126 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e18a22-5f5b-4233-86cb-1c014ab61840-run-httpd\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.186655 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.186717 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6tq9r"]
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.208585 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-scripts\") pod \"placement-db-sync-nlr7w\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.208646 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5lrj\" (UniqueName: \"kubernetes.io/projected/4623241a-c4dc-4646-9b03-aa89b84ca4b1-kube-api-access-l5lrj\") pod \"barbican-db-sync-6tq9r\" (UID: \"4623241a-c4dc-4646-9b03-aa89b84ca4b1\") " pod="openstack/barbican-db-sync-6tq9r"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.208676 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6677efd-b2e4-45b7-8703-3a189d87723d-logs\") pod \"placement-db-sync-nlr7w\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.208717 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-config-data\") pod \"placement-db-sync-nlr7w\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.208760 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4623241a-c4dc-4646-9b03-aa89b84ca4b1-db-sync-config-data\") pod \"barbican-db-sync-6tq9r\" (UID: \"4623241a-c4dc-4646-9b03-aa89b84ca4b1\") " pod="openstack/barbican-db-sync-6tq9r"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.208779 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4623241a-c4dc-4646-9b03-aa89b84ca4b1-combined-ca-bundle\") pod \"barbican-db-sync-6tq9r\" (UID: \"4623241a-c4dc-4646-9b03-aa89b84ca4b1\") " pod="openstack/barbican-db-sync-6tq9r"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.208812 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2jz5\" (UniqueName: \"kubernetes.io/projected/c6677efd-b2e4-45b7-8703-3a189d87723d-kube-api-access-m2jz5\") pod \"placement-db-sync-nlr7w\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.208861 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-combined-ca-bundle\") pod \"placement-db-sync-nlr7w\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.211164 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcc65018-86ae-4c36-bf21-3849c09ee648-combined-ca-bundle\") pod \"neutron-db-sync-9pl45\" (UID: \"bcc65018-86ae-4c36-bf21-3849c09ee648\") " pod="openstack/neutron-db-sync-9pl45"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.211279 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6677efd-b2e4-45b7-8703-3a189d87723d-logs\") pod \"placement-db-sync-nlr7w\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.219639 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-scripts\") pod \"placement-db-sync-nlr7w\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.225073 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.226869 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bcc65018-86ae-4c36-bf21-3849c09ee648-config\") pod \"neutron-db-sync-9pl45\" (UID: \"bcc65018-86ae-4c36-bf21-3849c09ee648\") " pod="openstack/neutron-db-sync-9pl45"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.227488 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25vcc\" (UniqueName: \"kubernetes.io/projected/39e18a22-5f5b-4233-86cb-1c014ab61840-kube-api-access-25vcc\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.227674 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-scripts\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.227851 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-config-data\") pod \"ceilometer-0\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.227852 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm9lw\" (UniqueName: \"kubernetes.io/projected/bcc65018-86ae-4c36-bf21-3849c09ee648-kube-api-access-zm9lw\") pod \"neutron-db-sync-9pl45\" (UID: \"bcc65018-86ae-4c36-bf21-3849c09ee648\") " pod="openstack/neutron-db-sync-9pl45"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.236999 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-combined-ca-bundle\") pod \"placement-db-sync-nlr7w\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.242022 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-config-data\") pod \"placement-db-sync-nlr7w\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.244298 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2jz5\" (UniqueName: \"kubernetes.io/projected/c6677efd-b2e4-45b7-8703-3a189d87723d-kube-api-access-m2jz5\") pod \"placement-db-sync-nlr7w\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.253153 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-54895dbb5-ssrzj"]
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.254563 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.312732 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nlr7w"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.324822 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4623241a-c4dc-4646-9b03-aa89b84ca4b1-db-sync-config-data\") pod \"barbican-db-sync-6tq9r\" (UID: \"4623241a-c4dc-4646-9b03-aa89b84ca4b1\") " pod="openstack/barbican-db-sync-6tq9r"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.325419 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4623241a-c4dc-4646-9b03-aa89b84ca4b1-db-sync-config-data\") pod \"barbican-db-sync-6tq9r\" (UID: \"4623241a-c4dc-4646-9b03-aa89b84ca4b1\") " pod="openstack/barbican-db-sync-6tq9r"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.325468 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4623241a-c4dc-4646-9b03-aa89b84ca4b1-combined-ca-bundle\") pod \"barbican-db-sync-6tq9r\" (UID: \"4623241a-c4dc-4646-9b03-aa89b84ca4b1\") " pod="openstack/barbican-db-sync-6tq9r"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.325698 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5lrj\" (UniqueName: \"kubernetes.io/projected/4623241a-c4dc-4646-9b03-aa89b84ca4b1-kube-api-access-l5lrj\") pod \"barbican-db-sync-6tq9r\" (UID: \"4623241a-c4dc-4646-9b03-aa89b84ca4b1\") " pod="openstack/barbican-db-sync-6tq9r"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.365987 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4623241a-c4dc-4646-9b03-aa89b84ca4b1-combined-ca-bundle\") pod \"barbican-db-sync-6tq9r\" (UID: \"4623241a-c4dc-4646-9b03-aa89b84ca4b1\") " pod="openstack/barbican-db-sync-6tq9r"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.446851 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41b7c593-91c6-4d69-af50-0e067e3fbea8-config-data\") pod \"horizon-54895dbb5-ssrzj\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.446992 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4g8v\" (UniqueName: \"kubernetes.io/projected/41b7c593-91c6-4d69-af50-0e067e3fbea8-kube-api-access-f4g8v\") pod \"horizon-54895dbb5-ssrzj\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.447129 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41b7c593-91c6-4d69-af50-0e067e3fbea8-scripts\") pod \"horizon-54895dbb5-ssrzj\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.447146 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41b7c593-91c6-4d69-af50-0e067e3fbea8-logs\") pod \"horizon-54895dbb5-ssrzj\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.447202 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41b7c593-91c6-4d69-af50-0e067e3fbea8-horizon-secret-key\") pod \"horizon-54895dbb5-ssrzj\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.447970 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.500047 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9pl45"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.520376 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54895dbb5-ssrzj"]
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.564757 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5lrj\" (UniqueName: \"kubernetes.io/projected/4623241a-c4dc-4646-9b03-aa89b84ca4b1-kube-api-access-l5lrj\") pod \"barbican-db-sync-6tq9r\" (UID: \"4623241a-c4dc-4646-9b03-aa89b84ca4b1\") " pod="openstack/barbican-db-sync-6tq9r"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.584933 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4g8v\" (UniqueName: \"kubernetes.io/projected/41b7c593-91c6-4d69-af50-0e067e3fbea8-kube-api-access-f4g8v\") pod \"horizon-54895dbb5-ssrzj\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.585117 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41b7c593-91c6-4d69-af50-0e067e3fbea8-scripts\") pod \"horizon-54895dbb5-ssrzj\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.585138 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41b7c593-91c6-4d69-af50-0e067e3fbea8-logs\") pod \"horizon-54895dbb5-ssrzj\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.585164 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41b7c593-91c6-4d69-af50-0e067e3fbea8-horizon-secret-key\") pod \"horizon-54895dbb5-ssrzj\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.585283 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41b7c593-91c6-4d69-af50-0e067e3fbea8-config-data\") pod \"horizon-54895dbb5-ssrzj\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.586424 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41b7c593-91c6-4d69-af50-0e067e3fbea8-scripts\") pod \"horizon-54895dbb5-ssrzj\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.586693 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41b7c593-91c6-4d69-af50-0e067e3fbea8-logs\") pod \"horizon-54895dbb5-ssrzj\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.591393 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41b7c593-91c6-4d69-af50-0e067e3fbea8-config-data\") pod \"horizon-54895dbb5-ssrzj\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.611944 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-79mcx"]
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.612477 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41b7c593-91c6-4d69-af50-0e067e3fbea8-horizon-secret-key\") pod \"horizon-54895dbb5-ssrzj\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.614789 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.632272 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4g8v\" (UniqueName: \"kubernetes.io/projected/41b7c593-91c6-4d69-af50-0e067e3fbea8-kube-api-access-f4g8v\") pod \"horizon-54895dbb5-ssrzj\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.642614 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-79mcx"]
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.730570 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6tq9r"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.788356 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.788411 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.788443 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdsq4\" (UniqueName: \"kubernetes.io/projected/43bcbb21-d5a1-440b-9554-306c249422f3-kube-api-access-rdsq4\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.788469 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-config\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.788536 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-dns-svc\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.788587 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.886798 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54895dbb5-ssrzj"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.889898 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.889962 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.889992 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.890023 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdsq4\" (UniqueName: \"kubernetes.io/projected/43bcbb21-d5a1-440b-9554-306c249422f3-kube-api-access-rdsq4\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.890054 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-config\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.890102 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-dns-svc\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.893512 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-dns-svc\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.904139 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.905195 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-config\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.905792 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.905804 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:07.915256 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdsq4\" (UniqueName: \"kubernetes.io/projected/43bcbb21-d5a1-440b-9554-306c249422f3-kube-api-access-rdsq4\") pod \"dnsmasq-dns-cf78879c9-79mcx\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.053752 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-79mcx"
Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.484531 4644 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.602145 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-ovsdbserver-nb\") pod \"3918b954-dba6-4e50-988d-5202be75f235\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.602196 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-config\") pod \"3918b954-dba6-4e50-988d-5202be75f235\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.602277 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-ovsdbserver-sb\") pod \"3918b954-dba6-4e50-988d-5202be75f235\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.602450 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-dns-swift-storage-0\") pod \"3918b954-dba6-4e50-988d-5202be75f235\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.602531 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-dns-svc\") pod \"3918b954-dba6-4e50-988d-5202be75f235\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.602577 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt2w7\" (UniqueName: \"kubernetes.io/projected/3918b954-dba6-4e50-988d-5202be75f235-kube-api-access-bt2w7\") pod \"3918b954-dba6-4e50-988d-5202be75f235\" (UID: \"3918b954-dba6-4e50-988d-5202be75f235\") " Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.627577 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3918b954-dba6-4e50-988d-5202be75f235-kube-api-access-bt2w7" (OuterVolumeSpecName: "kube-api-access-bt2w7") pod "3918b954-dba6-4e50-988d-5202be75f235" (UID: "3918b954-dba6-4e50-988d-5202be75f235"). InnerVolumeSpecName "kube-api-access-bt2w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.704668 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt2w7\" (UniqueName: \"kubernetes.io/projected/3918b954-dba6-4e50-988d-5202be75f235-kube-api-access-bt2w7\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.716155 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-config" (OuterVolumeSpecName: "config") pod "3918b954-dba6-4e50-988d-5202be75f235" (UID: "3918b954-dba6-4e50-988d-5202be75f235"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.730969 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3918b954-dba6-4e50-988d-5202be75f235" (UID: "3918b954-dba6-4e50-988d-5202be75f235"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.752788 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3918b954-dba6-4e50-988d-5202be75f235" (UID: "3918b954-dba6-4e50-988d-5202be75f235"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.763935 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3918b954-dba6-4e50-988d-5202be75f235" (UID: "3918b954-dba6-4e50-988d-5202be75f235"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.783746 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3918b954-dba6-4e50-988d-5202be75f235" (UID: "3918b954-dba6-4e50-988d-5202be75f235"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.799533 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-nlr7w"] Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.808173 4644 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.808204 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.808214 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.808224 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.808232 4644 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3918b954-dba6-4e50-988d-5202be75f235-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.811100 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-847d8d4b6f-jkqkx"] Feb 04 08:59:08 crc kubenswrapper[4644]: W0204 08:59:08.815386 4644 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6677efd_b2e4_45b7_8703_3a189d87723d.slice/crio-aaf0091551a1bbbb4cc5ce222f8f84cdc4e32fcf7e2160f67a424504d89617cf WatchSource:0}: Error finding container aaf0091551a1bbbb4cc5ce222f8f84cdc4e32fcf7e2160f67a424504d89617cf: Status 404 returned error can't find the container with id aaf0091551a1bbbb4cc5ce222f8f84cdc4e32fcf7e2160f67a424504d89617cf Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.826089 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-5l8f7"] Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.846458 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6tq9r"] Feb 04 08:59:08 crc kubenswrapper[4644]: I0204 08:59:08.852993 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54895dbb5-ssrzj"] Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.011942 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wlwtb"] Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.042774 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9pl45"] Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.078029 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xkk6h"] Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.133783 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.167033 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54895dbb5-ssrzj"] Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.167094 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-79mcx"] Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.210844 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.234923 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-596fcd84d5-qzdf7"] Feb 04 08:59:09 crc kubenswrapper[4644]: E0204 08:59:09.235294 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3918b954-dba6-4e50-988d-5202be75f235" containerName="init" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.235306 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="3918b954-dba6-4e50-988d-5202be75f235" containerName="init" Feb 04 08:59:09 crc kubenswrapper[4644]: E0204 08:59:09.235348 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3918b954-dba6-4e50-988d-5202be75f235" containerName="dnsmasq-dns" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.235356 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="3918b954-dba6-4e50-988d-5202be75f235" containerName="dnsmasq-dns" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.235542 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="3918b954-dba6-4e50-988d-5202be75f235" containerName="dnsmasq-dns" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.236399 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.253625 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e18a22-5f5b-4233-86cb-1c014ab61840","Type":"ContainerStarted","Data":"27f85d3cf912538a58bc74c147131f889352e573cd15cb01463214e3afd991a6"} Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.263007 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nlr7w" event={"ID":"c6677efd-b2e4-45b7-8703-3a189d87723d","Type":"ContainerStarted","Data":"aaf0091551a1bbbb4cc5ce222f8f84cdc4e32fcf7e2160f67a424504d89617cf"} Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.264497 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-596fcd84d5-qzdf7"] Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.264549 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-5l8f7" event={"ID":"4cdd03c0-eab6-43c6-97b2-af44af5a7179","Type":"ContainerStarted","Data":"f397af0b3db4915831ae034c65d06072bb0c7e1506eb71ea98a14ac937c57530"} Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.271682 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xkk6h" event={"ID":"dbbacde7-db93-44ed-801c-be230c2f1594","Type":"ContainerStarted","Data":"36135450fa4fcbbcc372897e8711a2488887c0bc5f431e83c2cb7537faf97dfb"} Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.279642 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wlwtb" event={"ID":"f6df95b1-d952-4b17-bb90-2a32fecb0a5b","Type":"ContainerStarted","Data":"92a0752ee0aad4b74673b62ed5660b1fd391c59e85385393df04c155c3b08202"} Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.295166 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54895dbb5-ssrzj" event={"ID":"41b7c593-91c6-4d69-af50-0e067e3fbea8","Type":"ContainerStarted","Data":"db44f1d80bcdaa428c8874259ec2643ec1443c2d05bc6af35229b845df55b73b"} Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.298616 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-847d8d4b6f-jkqkx" event={"ID":"630413ac-d24f-412b-a8da-ec0d7f64ed1e","Type":"ContainerStarted","Data":"87dd626b7c80e3f5ea0eebdfe0521606636d3ffb70659afb0699bd9698965e6e"} Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.305934 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6tq9r" event={"ID":"4623241a-c4dc-4646-9b03-aa89b84ca4b1","Type":"ContainerStarted","Data":"2d2a856ec94687a817b78c9268cea98b1157be90c703e6df1e98ed9dc6b4dbf6"} Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.309090 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" event={"ID":"3918b954-dba6-4e50-988d-5202be75f235","Type":"ContainerDied","Data":"155050f1d6b4940f7e22ca4edc68dcf56fbea630cbfd140a615329705e2b5a68"} Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.309130 4644 scope.go:117] "RemoveContainer" containerID="c01f2d7ed8a0e586a737492f221441763349262902ac50c556d2e96683bb3f61" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.312591 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cg4bb" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.318463 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71bf7beb-de53-41c4-a39a-649731849784-scripts\") pod \"horizon-596fcd84d5-qzdf7\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.318840 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71bf7beb-de53-41c4-a39a-649731849784-config-data\") pod \"horizon-596fcd84d5-qzdf7\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.318870 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9pl45" event={"ID":"bcc65018-86ae-4c36-bf21-3849c09ee648","Type":"ContainerStarted","Data":"71702e314a7f41021e63c8522e6243fd7ba33d7f2bdec84f2b8187cbc42b9b65"} Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.318992 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71bf7beb-de53-41c4-a39a-649731849784-logs\") pod \"horizon-596fcd84d5-qzdf7\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.319139 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh74n\" (UniqueName: \"kubernetes.io/projected/71bf7beb-de53-41c4-a39a-649731849784-kube-api-access-dh74n\") pod \"horizon-596fcd84d5-qzdf7\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.319556 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/71bf7beb-de53-41c4-a39a-649731849784-horizon-secret-key\") pod \"horizon-596fcd84d5-qzdf7\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.362060 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cg4bb"] Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.369873 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cg4bb"] Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.416055 4644 scope.go:117] "RemoveContainer" containerID="9eb924c3b11bc9b1ab0d1dc58f3dfaa5819443de38db181151f824380582d4b3" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.421142 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/71bf7beb-de53-41c4-a39a-649731849784-horizon-secret-key\") pod \"horizon-596fcd84d5-qzdf7\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.421223 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71bf7beb-de53-41c4-a39a-649731849784-scripts\") pod \"horizon-596fcd84d5-qzdf7\" (UID: 
\"71bf7beb-de53-41c4-a39a-649731849784\") " pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.421251 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71bf7beb-de53-41c4-a39a-649731849784-config-data\") pod \"horizon-596fcd84d5-qzdf7\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.421315 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71bf7beb-de53-41c4-a39a-649731849784-logs\") pod \"horizon-596fcd84d5-qzdf7\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.421380 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh74n\" (UniqueName: \"kubernetes.io/projected/71bf7beb-de53-41c4-a39a-649731849784-kube-api-access-dh74n\") pod \"horizon-596fcd84d5-qzdf7\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.422069 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71bf7beb-de53-41c4-a39a-649731849784-scripts\") pod \"horizon-596fcd84d5-qzdf7\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.423799 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71bf7beb-de53-41c4-a39a-649731849784-logs\") pod \"horizon-596fcd84d5-qzdf7\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.425056 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71bf7beb-de53-41c4-a39a-649731849784-config-data\") pod \"horizon-596fcd84d5-qzdf7\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.444807 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/71bf7beb-de53-41c4-a39a-649731849784-horizon-secret-key\") pod \"horizon-596fcd84d5-qzdf7\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.448522 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh74n\" (UniqueName: \"kubernetes.io/projected/71bf7beb-de53-41c4-a39a-649731849784-kube-api-access-dh74n\") pod \"horizon-596fcd84d5-qzdf7\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.599024 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:09 crc kubenswrapper[4644]: I0204 08:59:09.919079 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-596fcd84d5-qzdf7"] Feb 04 08:59:10 crc kubenswrapper[4644]: I0204 08:59:10.345396 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9pl45" event={"ID":"bcc65018-86ae-4c36-bf21-3849c09ee648","Type":"ContainerStarted","Data":"e9b4afe63498807468ab2afa12176fe6e890d0486f5c4adff425ef21e97a8a10"} Feb 04 08:59:10 crc kubenswrapper[4644]: I0204 08:59:10.368754 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9pl45" podStartSLOduration=4.368728075 podStartE2EDuration="4.368728075s" podCreationTimestamp="2026-02-04 08:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:59:10.360053507 +0000 UTC m=+1060.400111272" watchObservedRunningTime="2026-02-04 08:59:10.368728075 +0000 UTC m=+1060.408785830" Feb 04 08:59:10 crc kubenswrapper[4644]: I0204 08:59:10.371086 4644 generic.go:334] "Generic (PLEG): container finished" podID="4cdd03c0-eab6-43c6-97b2-af44af5a7179" containerID="d4bc4aac081e3e5231a3b47bac1486ea3a4ceac1959da64ea3c89573ea7c0d06" exitCode=0 Feb 04 08:59:10 crc kubenswrapper[4644]: I0204 08:59:10.371255 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-5l8f7" event={"ID":"4cdd03c0-eab6-43c6-97b2-af44af5a7179","Type":"ContainerDied","Data":"d4bc4aac081e3e5231a3b47bac1486ea3a4ceac1959da64ea3c89573ea7c0d06"} Feb 04 08:59:10 crc kubenswrapper[4644]: I0204 08:59:10.379664 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xkk6h" event={"ID":"dbbacde7-db93-44ed-801c-be230c2f1594","Type":"ContainerStarted","Data":"1b6c8806b58f224f4e456fbcab25c36e534f86ff01340ba639953f271bdcb6af"} Feb 04 08:59:10 crc kubenswrapper[4644]: I0204 08:59:10.385885 4644 generic.go:334] "Generic (PLEG): container finished" podID="43bcbb21-d5a1-440b-9554-306c249422f3" containerID="50ab352c69872eca92da3a99ac3b8608c02459622903fa5eae0325dd1df621a2" exitCode=0 Feb 04 08:59:10 crc kubenswrapper[4644]: I0204 08:59:10.386034 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-79mcx" event={"ID":"43bcbb21-d5a1-440b-9554-306c249422f3","Type":"ContainerDied","Data":"50ab352c69872eca92da3a99ac3b8608c02459622903fa5eae0325dd1df621a2"} Feb 04 08:59:10 crc kubenswrapper[4644]: I0204 08:59:10.386221 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-79mcx" event={"ID":"43bcbb21-d5a1-440b-9554-306c249422f3","Type":"ContainerStarted","Data":"55efd17c74bf3846f47886401fb42d1c0ddd4a0450cb84727301c58033a3289a"} Feb 04 08:59:10 crc kubenswrapper[4644]: I0204 08:59:10.395258 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-596fcd84d5-qzdf7" event={"ID":"71bf7beb-de53-41c4-a39a-649731849784","Type":"ContainerStarted","Data":"dcdaef6aa094cded089c8f6690cb075dc8bba8f99bcfa2e84e8b463e99b38f96"} Feb 04 08:59:10 crc kubenswrapper[4644]: I0204 08:59:10.459662 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xkk6h" podStartSLOduration=4.459626434 podStartE2EDuration="4.459626434s" podCreationTimestamp="2026-02-04 08:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:59:10.422531417 +0000 UTC m=+1060.462589172" watchObservedRunningTime="2026-02-04 08:59:10.459626434 +0000 UTC m=+1060.499684189" Feb 04 08:59:10 crc kubenswrapper[4644]: I0204 08:59:10.711703 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3918b954-dba6-4e50-988d-5202be75f235" path="/var/lib/kubelet/pods/3918b954-dba6-4e50-988d-5202be75f235/volumes" Feb 04 08:59:10 crc kubenswrapper[4644]: I0204 08:59:10.887203 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-5l8f7" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.070037 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-ovsdbserver-sb\") pod \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.070253 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-dns-svc\") pod \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.070284 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-ovsdbserver-nb\") pod \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.070432 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-dns-swift-storage-0\") pod \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.070463 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm79p\" (UniqueName: \"kubernetes.io/projected/4cdd03c0-eab6-43c6-97b2-af44af5a7179-kube-api-access-xm79p\") pod \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.070494 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-config\") pod \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\" (UID: \"4cdd03c0-eab6-43c6-97b2-af44af5a7179\") " Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.076713 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cdd03c0-eab6-43c6-97b2-af44af5a7179-kube-api-access-xm79p" (OuterVolumeSpecName: "kube-api-access-xm79p") pod "4cdd03c0-eab6-43c6-97b2-af44af5a7179" (UID: "4cdd03c0-eab6-43c6-97b2-af44af5a7179"). InnerVolumeSpecName "kube-api-access-xm79p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.115018 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4cdd03c0-eab6-43c6-97b2-af44af5a7179" (UID: "4cdd03c0-eab6-43c6-97b2-af44af5a7179"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.116415 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4cdd03c0-eab6-43c6-97b2-af44af5a7179" (UID: "4cdd03c0-eab6-43c6-97b2-af44af5a7179"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.129969 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4cdd03c0-eab6-43c6-97b2-af44af5a7179" (UID: "4cdd03c0-eab6-43c6-97b2-af44af5a7179"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.155388 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4cdd03c0-eab6-43c6-97b2-af44af5a7179" (UID: "4cdd03c0-eab6-43c6-97b2-af44af5a7179"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.169338 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-config" (OuterVolumeSpecName: "config") pod "4cdd03c0-eab6-43c6-97b2-af44af5a7179" (UID: "4cdd03c0-eab6-43c6-97b2-af44af5a7179"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.174526 4644 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.174576 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.174589 4644 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.174604 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm79p\" (UniqueName: \"kubernetes.io/projected/4cdd03c0-eab6-43c6-97b2-af44af5a7179-kube-api-access-xm79p\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.174615 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.174626 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cdd03c0-eab6-43c6-97b2-af44af5a7179-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.466431 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b868669f-5l8f7" event={"ID":"4cdd03c0-eab6-43c6-97b2-af44af5a7179","Type":"ContainerDied","Data":"f397af0b3db4915831ae034c65d06072bb0c7e1506eb71ea98a14ac937c57530"} Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.466666 4644 scope.go:117] "RemoveContainer" containerID="d4bc4aac081e3e5231a3b47bac1486ea3a4ceac1959da64ea3c89573ea7c0d06" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.466675 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-5l8f7" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.474195 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-79mcx" event={"ID":"43bcbb21-d5a1-440b-9554-306c249422f3","Type":"ContainerStarted","Data":"98b7d1a96fd469bed3bc2e3646b8bd88388acb7f5143159087c4552e7735a889"} Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.474642 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-79mcx" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.514935 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-79mcx" podStartSLOduration=4.5149139940000005 podStartE2EDuration="4.514913994s" podCreationTimestamp="2026-02-04 08:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:59:11.513842505 +0000 UTC m=+1061.553900260" watchObservedRunningTime="2026-02-04 08:59:11.514913994 +0000 UTC m=+1061.554971749" Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.676878 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-5l8f7"] Feb 04 08:59:11 crc kubenswrapper[4644]: I0204 08:59:11.705503 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-5l8f7"] Feb 04 08:59:12 crc kubenswrapper[4644]: I0204 08:59:12.711162 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cdd03c0-eab6-43c6-97b2-af44af5a7179" path="/var/lib/kubelet/pods/4cdd03c0-eab6-43c6-97b2-af44af5a7179/volumes" Feb 04 08:59:15 crc kubenswrapper[4644]: I0204 08:59:15.928095 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=< Feb 04 08:59:15 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 08:59:15 crc kubenswrapper[4644]: > Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.158188 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-847d8d4b6f-jkqkx"] Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.206096 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5fb9db66f6-v84nx"] Feb 04 08:59:16 crc kubenswrapper[4644]: E0204 08:59:16.206637 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cdd03c0-eab6-43c6-97b2-af44af5a7179" containerName="init" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.206762 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cdd03c0-eab6-43c6-97b2-af44af5a7179" containerName="init" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.207018 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cdd03c0-eab6-43c6-97b2-af44af5a7179" containerName="init" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 
08:59:16.208219 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.217827 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.244919 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fb9db66f6-v84nx"] Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.304584 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-596fcd84d5-qzdf7"] Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.305099 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-horizon-tls-certs\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.305299 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-horizon-secret-key\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.305461 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-combined-ca-bundle\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.305564 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcn9q\" (UniqueName: \"kubernetes.io/projected/46ca97c0-c6d7-4547-bb97-1d8b032c6297-kube-api-access-hcn9q\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.305649 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ca97c0-c6d7-4547-bb97-1d8b032c6297-logs\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.305732 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46ca97c0-c6d7-4547-bb97-1d8b032c6297-config-data\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.305837 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46ca97c0-c6d7-4547-bb97-1d8b032c6297-scripts\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.340678 4644 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-658bfcb544-88gj4"] Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.342307 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.359672 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-658bfcb544-88gj4"] Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.406908 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46ca97c0-c6d7-4547-bb97-1d8b032c6297-config-data\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.406966 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46ca97c0-c6d7-4547-bb97-1d8b032c6297-scripts\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.406999 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-horizon-tls-certs\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.407022 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/676db25f-e0ad-48cc-af2c-88029d6eb80d-config-data\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.407093 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/676db25f-e0ad-48cc-af2c-88029d6eb80d-horizon-tls-certs\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.407119 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-horizon-secret-key\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.407143 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/676db25f-e0ad-48cc-af2c-88029d6eb80d-horizon-secret-key\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.407204 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-combined-ca-bundle\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.407220 4644 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/676db25f-e0ad-48cc-af2c-88029d6eb80d-combined-ca-bundle\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.407238 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/676db25f-e0ad-48cc-af2c-88029d6eb80d-logs\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.407253 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/676db25f-e0ad-48cc-af2c-88029d6eb80d-scripts\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.407276 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcn9q\" (UniqueName: \"kubernetes.io/projected/46ca97c0-c6d7-4547-bb97-1d8b032c6297-kube-api-access-hcn9q\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.407300 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl5qv\" (UniqueName: \"kubernetes.io/projected/676db25f-e0ad-48cc-af2c-88029d6eb80d-kube-api-access-xl5qv\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.407319 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ca97c0-c6d7-4547-bb97-1d8b032c6297-logs\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.408843 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ca97c0-c6d7-4547-bb97-1d8b032c6297-logs\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.409689 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46ca97c0-c6d7-4547-bb97-1d8b032c6297-scripts\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.410311 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46ca97c0-c6d7-4547-bb97-1d8b032c6297-config-data\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.418829 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-combined-ca-bundle\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.420934 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-horizon-secret-key\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.439075 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-horizon-tls-certs\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.446214 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcn9q\" (UniqueName: \"kubernetes.io/projected/46ca97c0-c6d7-4547-bb97-1d8b032c6297-kube-api-access-hcn9q\") pod \"horizon-5fb9db66f6-v84nx\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.509367 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/676db25f-e0ad-48cc-af2c-88029d6eb80d-horizon-tls-certs\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.509433 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/676db25f-e0ad-48cc-af2c-88029d6eb80d-horizon-secret-key\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.509522 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/676db25f-e0ad-48cc-af2c-88029d6eb80d-combined-ca-bundle\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.509543 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/676db25f-e0ad-48cc-af2c-88029d6eb80d-logs\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.509563 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/676db25f-e0ad-48cc-af2c-88029d6eb80d-scripts\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.509592 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl5qv\" (UniqueName: \"kubernetes.io/projected/676db25f-e0ad-48cc-af2c-88029d6eb80d-kube-api-access-xl5qv\") pod \"horizon-658bfcb544-88gj4\" (UID: 
\"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.509654 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/676db25f-e0ad-48cc-af2c-88029d6eb80d-config-data\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.511245 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/676db25f-e0ad-48cc-af2c-88029d6eb80d-config-data\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.512275 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/676db25f-e0ad-48cc-af2c-88029d6eb80d-scripts\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.512385 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/676db25f-e0ad-48cc-af2c-88029d6eb80d-logs\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.515672 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/676db25f-e0ad-48cc-af2c-88029d6eb80d-combined-ca-bundle\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.516367 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/676db25f-e0ad-48cc-af2c-88029d6eb80d-horizon-tls-certs\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.527114 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/676db25f-e0ad-48cc-af2c-88029d6eb80d-horizon-secret-key\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.540003 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl5qv\" (UniqueName: \"kubernetes.io/projected/676db25f-e0ad-48cc-af2c-88029d6eb80d-kube-api-access-xl5qv\") pod \"horizon-658bfcb544-88gj4\" (UID: \"676db25f-e0ad-48cc-af2c-88029d6eb80d\") " pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.611830 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 08:59:16 crc kubenswrapper[4644]: I0204 08:59:16.672593 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-658bfcb544-88gj4" Feb 04 08:59:18 crc kubenswrapper[4644]: I0204 08:59:18.056243 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-79mcx" Feb 04 08:59:18 crc kubenswrapper[4644]: I0204 08:59:18.134527 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pdrwn"] Feb 04 08:59:18 crc kubenswrapper[4644]: I0204 08:59:18.134795 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" podUID="42473ac1-38e6-4651-9f0b-13df0950127d" containerName="dnsmasq-dns" containerID="cri-o://6279bc2710aafb11c8399cc2671efe9db0b87443ab72be2381b53e8052a336ba" gracePeriod=10 Feb 04 08:59:18 crc kubenswrapper[4644]: I0204 08:59:18.598999 4644 generic.go:334] "Generic (PLEG): container finished" podID="42473ac1-38e6-4651-9f0b-13df0950127d" containerID="6279bc2710aafb11c8399cc2671efe9db0b87443ab72be2381b53e8052a336ba" exitCode=0 Feb 04 08:59:18 crc kubenswrapper[4644]: I0204 08:59:18.599096 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" event={"ID":"42473ac1-38e6-4651-9f0b-13df0950127d","Type":"ContainerDied","Data":"6279bc2710aafb11c8399cc2671efe9db0b87443ab72be2381b53e8052a336ba"} Feb 04 08:59:18 crc kubenswrapper[4644]: I0204 08:59:18.603321 4644 generic.go:334] "Generic (PLEG): container finished" podID="f1573f43-1a60-4b32-8286-02fb06f9d3a8" containerID="fd7100471e7661330f3d0a9dbe264cf5cfea675d295fb99705c0588b8176eb5f" exitCode=0 Feb 04 08:59:18 crc kubenswrapper[4644]: I0204 08:59:18.603388 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6dczk" event={"ID":"f1573f43-1a60-4b32-8286-02fb06f9d3a8","Type":"ContainerDied","Data":"fd7100471e7661330f3d0a9dbe264cf5cfea675d295fb99705c0588b8176eb5f"} Feb 04 08:59:20 crc kubenswrapper[4644]: I0204 08:59:20.620864 4644 generic.go:334] "Generic (PLEG): container finished" podID="dbbacde7-db93-44ed-801c-be230c2f1594" containerID="1b6c8806b58f224f4e456fbcab25c36e534f86ff01340ba639953f271bdcb6af" exitCode=0 Feb 04 08:59:20 crc kubenswrapper[4644]: I0204 08:59:20.621091 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xkk6h" event={"ID":"dbbacde7-db93-44ed-801c-be230c2f1594","Type":"ContainerDied","Data":"1b6c8806b58f224f4e456fbcab25c36e534f86ff01340ba639953f271bdcb6af"} Feb 04 08:59:20 crc kubenswrapper[4644]: I0204 08:59:20.943301 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" podUID="42473ac1-38e6-4651-9f0b-13df0950127d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Feb 04 08:59:25 crc kubenswrapper[4644]: E0204 08:59:25.329855 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 04 08:59:25 crc kubenswrapper[4644]: E0204 08:59:25.330474 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2jz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-nlr7w_openstack(c6677efd-b2e4-45b7-8703-3a189d87723d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:59:25 crc kubenswrapper[4644]: E0204 08:59:25.331682 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-nlr7w" podUID="c6677efd-b2e4-45b7-8703-3a189d87723d" Feb 04 08:59:25 crc kubenswrapper[4644]: E0204 08:59:25.709279 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-nlr7w" podUID="c6677efd-b2e4-45b7-8703-3a189d87723d" Feb 04 08:59:25 crc kubenswrapper[4644]: I0204 08:59:25.837097 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=< Feb 04 08:59:25 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 08:59:25 crc kubenswrapper[4644]: > Feb 04 08:59:25 crc kubenswrapper[4644]: I0204 08:59:25.837581 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 08:59:25 crc kubenswrapper[4644]: I0204 
08:59:25.838415 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"4fbe46b700677b8e24ec42642dc2f2b502bcb7925bc5346a880f37eedcce140a"} pod="openshift-marketplace/redhat-operators-lffcr" containerMessage="Container registry-server failed startup probe, will be restarted" Feb 04 08:59:25 crc kubenswrapper[4644]: I0204 08:59:25.838518 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" containerID="cri-o://4fbe46b700677b8e24ec42642dc2f2b502bcb7925bc5346a880f37eedcce140a" gracePeriod=30 Feb 04 08:59:25 crc kubenswrapper[4644]: I0204 08:59:25.942035 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" podUID="42473ac1-38e6-4651-9f0b-13df0950127d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Feb 04 08:59:30 crc kubenswrapper[4644]: I0204 08:59:30.943724 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" podUID="42473ac1-38e6-4651-9f0b-13df0950127d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Feb 04 08:59:30 crc kubenswrapper[4644]: I0204 08:59:30.944360 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:59:35 crc kubenswrapper[4644]: I0204 08:59:35.942388 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" podUID="42473ac1-38e6-4651-9f0b-13df0950127d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.651094 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6dczk" Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.773715 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-config-data\") pod \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.773756 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92slt\" (UniqueName: \"kubernetes.io/projected/f1573f43-1a60-4b32-8286-02fb06f9d3a8-kube-api-access-92slt\") pod \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.773900 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-db-sync-config-data\") pod \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.774026 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-combined-ca-bundle\") pod \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\" (UID: \"f1573f43-1a60-4b32-8286-02fb06f9d3a8\") " Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.779135 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f1573f43-1a60-4b32-8286-02fb06f9d3a8" (UID: "f1573f43-1a60-4b32-8286-02fb06f9d3a8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.781474 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1573f43-1a60-4b32-8286-02fb06f9d3a8-kube-api-access-92slt" (OuterVolumeSpecName: "kube-api-access-92slt") pod "f1573f43-1a60-4b32-8286-02fb06f9d3a8" (UID: "f1573f43-1a60-4b32-8286-02fb06f9d3a8"). InnerVolumeSpecName "kube-api-access-92slt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.803540 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1573f43-1a60-4b32-8286-02fb06f9d3a8" (UID: "f1573f43-1a60-4b32-8286-02fb06f9d3a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.829179 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-config-data" (OuterVolumeSpecName: "config-data") pod "f1573f43-1a60-4b32-8286-02fb06f9d3a8" (UID: "f1573f43-1a60-4b32-8286-02fb06f9d3a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.845402 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6dczk" event={"ID":"f1573f43-1a60-4b32-8286-02fb06f9d3a8","Type":"ContainerDied","Data":"604e72f9452cbebd32debfcc658c3eb8df0eaa80a0418a24c320b0371f38354c"} Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.845449 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="604e72f9452cbebd32debfcc658c3eb8df0eaa80a0418a24c320b0371f38354c" Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.845467 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6dczk" Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.877568 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.877605 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.877618 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92slt\" (UniqueName: \"kubernetes.io/projected/f1573f43-1a60-4b32-8286-02fb06f9d3a8-kube-api-access-92slt\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:38 crc kubenswrapper[4644]: I0204 08:59:38.877628 4644 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f1573f43-1a60-4b32-8286-02fb06f9d3a8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.133262 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-s25jz"] Feb 04 08:59:40 crc kubenswrapper[4644]: E0204 08:59:40.133924 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1573f43-1a60-4b32-8286-02fb06f9d3a8" containerName="glance-db-sync" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.133936 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1573f43-1a60-4b32-8286-02fb06f9d3a8" containerName="glance-db-sync" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.134084 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1573f43-1a60-4b32-8286-02fb06f9d3a8" containerName="glance-db-sync" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.134939 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.167407 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-s25jz"] Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.201026 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-config\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.201065 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.201126 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvtg9\" (UniqueName: \"kubernetes.io/projected/5ee070eb-92be-4710-904a-7dab66158ee2-kube-api-access-nvtg9\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.201174 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.201245 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.201280 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.301985 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvtg9\" (UniqueName: \"kubernetes.io/projected/5ee070eb-92be-4710-904a-7dab66158ee2-kube-api-access-nvtg9\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.302067 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.302141 4644 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.302177 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.302233 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-config\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.302255 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.303146 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.303262 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.309830 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.310038 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.310393 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-config\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.325353 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvtg9\" (UniqueName: 
\"kubernetes.io/projected/5ee070eb-92be-4710-904a-7dab66158ee2-kube-api-access-nvtg9\") pod \"dnsmasq-dns-56df8fb6b7-s25jz\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.473210 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 08:59:40 crc kubenswrapper[4644]: I0204 08:59:40.941782 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" podUID="42473ac1-38e6-4651-9f0b-13df0950127d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.005576 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.008715 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.016376 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.050356 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.050971 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.051090 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bh6g5" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.153090 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-config-data\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.154232 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-logs\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.154308 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.154441 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86vz4\" (UniqueName: \"kubernetes.io/projected/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-kube-api-access-86vz4\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.154569 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-scripts\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.154644 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.154788 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.256832 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-logs\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.256866 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.256935 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86vz4\" (UniqueName: \"kubernetes.io/projected/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-kube-api-access-86vz4\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.256982 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-scripts\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.257003 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.257023 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.257080 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-config-data\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.257481 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-logs\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.257496 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.257791 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.267964 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-config-data\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.268564 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-scripts\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.281910 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.282591 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86vz4\" (UniqueName: \"kubernetes.io/projected/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-kube-api-access-86vz4\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.304691 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.361566 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.380376 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.381740 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.386832 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.395766 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.479640 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5rtj\" (UniqueName: \"kubernetes.io/projected/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-kube-api-access-t5rtj\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.479697 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.479726 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.479765 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.479860 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.479879 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.479908 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-logs\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " 
pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.581586 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5rtj\" (UniqueName: \"kubernetes.io/projected/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-kube-api-access-t5rtj\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.581661 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.581696 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.581740 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.581839 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.581861 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.581893 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-logs\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.582485 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-logs\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.586432 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.586561 4644 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.590185 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.594436 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.597443 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.600751 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5rtj\" (UniqueName: \"kubernetes.io/projected/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-kube-api-access-t5rtj\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.610186 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " pod="openstack/glance-default-internal-api-0" Feb 04 08:59:41 crc kubenswrapper[4644]: I0204 08:59:41.717663 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 08:59:42 crc kubenswrapper[4644]: I0204 08:59:42.786664 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 08:59:42 crc kubenswrapper[4644]: I0204 08:59:42.866998 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 08:59:43 crc kubenswrapper[4644]: I0204 08:59:43.885904 4644 generic.go:334] "Generic (PLEG): container finished" podID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerID="4fbe46b700677b8e24ec42642dc2f2b502bcb7925bc5346a880f37eedcce140a" exitCode=0 Feb 04 08:59:43 crc kubenswrapper[4644]: I0204 08:59:43.885964 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lffcr" event={"ID":"1c536da4-4882-4716-b4ae-4894dcc769e1","Type":"ContainerDied","Data":"4fbe46b700677b8e24ec42642dc2f2b502bcb7925bc5346a880f37eedcce140a"} Feb 04 08:59:45 crc kubenswrapper[4644]: I0204 08:59:45.941930 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" podUID="42473ac1-38e6-4651-9f0b-13df0950127d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Feb 04 08:59:50 crc kubenswrapper[4644]: E0204 08:59:50.106465 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 04 08:59:50 crc kubenswrapper[4644]: E0204 08:59:50.106990 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n554h64bh648h65h6h5f6h5bch9fhfch64ch685h668h56h66ch5dbh5f8h8bh97h589h58h664h55h58hd5h687h667h544hddh58fh56h75h566q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4g8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-54895dbb5-ssrzj_openstack(41b7c593-91c6-4d69-af50-0e067e3fbea8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:59:50 crc kubenswrapper[4644]: E0204 08:59:50.114509 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-54895dbb5-ssrzj" podUID="41b7c593-91c6-4d69-af50-0e067e3fbea8" Feb 04 08:59:50 crc kubenswrapper[4644]: E0204 08:59:50.203421 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 04 08:59:50 crc kubenswrapper[4644]: E0204 08:59:50.203618 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67ch68h79h566h5dfh5cch5f4h564hfh695h569h578h67hffh65bh6fh64bh8bh5f7hfh54bh9bh556h5fhdch678hbdh667h67bh99hc8h64q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9xvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-847d8d4b6f-jkqkx_openstack(630413ac-d24f-412b-a8da-ec0d7f64ed1e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:59:50 crc kubenswrapper[4644]: E0204 08:59:50.206110 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-847d8d4b6f-jkqkx" podUID="630413ac-d24f-412b-a8da-ec0d7f64ed1e" Feb 04 08:59:50 crc kubenswrapper[4644]: I0204 08:59:50.942087 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" podUID="42473ac1-38e6-4651-9f0b-13df0950127d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.117160 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xkk6h" Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.284984 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-config-data\") pod \"dbbacde7-db93-44ed-801c-be230c2f1594\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.285069 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prfzn\" (UniqueName: \"kubernetes.io/projected/dbbacde7-db93-44ed-801c-be230c2f1594-kube-api-access-prfzn\") pod \"dbbacde7-db93-44ed-801c-be230c2f1594\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.285204 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-fernet-keys\") pod \"dbbacde7-db93-44ed-801c-be230c2f1594\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.285303 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-scripts\") pod \"dbbacde7-db93-44ed-801c-be230c2f1594\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.285347 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-combined-ca-bundle\") pod \"dbbacde7-db93-44ed-801c-be230c2f1594\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.285386 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-credential-keys\") pod \"dbbacde7-db93-44ed-801c-be230c2f1594\" (UID: \"dbbacde7-db93-44ed-801c-be230c2f1594\") " Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.291807 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-scripts" (OuterVolumeSpecName: "scripts") pod "dbbacde7-db93-44ed-801c-be230c2f1594" (UID: "dbbacde7-db93-44ed-801c-be230c2f1594"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.292834 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dbbacde7-db93-44ed-801c-be230c2f1594" (UID: "dbbacde7-db93-44ed-801c-be230c2f1594"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.293159 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbbacde7-db93-44ed-801c-be230c2f1594-kube-api-access-prfzn" (OuterVolumeSpecName: "kube-api-access-prfzn") pod "dbbacde7-db93-44ed-801c-be230c2f1594" (UID: "dbbacde7-db93-44ed-801c-be230c2f1594"). InnerVolumeSpecName "kube-api-access-prfzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.312780 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-config-data" (OuterVolumeSpecName: "config-data") pod "dbbacde7-db93-44ed-801c-be230c2f1594" (UID: "dbbacde7-db93-44ed-801c-be230c2f1594"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.322350 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "dbbacde7-db93-44ed-801c-be230c2f1594" (UID: "dbbacde7-db93-44ed-801c-be230c2f1594"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:59:55 crc kubenswrapper[4644]: E0204 08:59:55.342673 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 04 08:59:55 crc kubenswrapper[4644]: E0204 08:59:55.342838 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2jz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-nlr7w_openstack(c6677efd-b2e4-45b7-8703-3a189d87723d): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Feb 04 08:59:55 crc kubenswrapper[4644]: E0204 08:59:55.347030 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-nlr7w" podUID="c6677efd-b2e4-45b7-8703-3a189d87723d" Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.379856 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbbacde7-db93-44ed-801c-be230c2f1594" (UID: "dbbacde7-db93-44ed-801c-be230c2f1594"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.388087 4644 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.388122 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.388135 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.388148 4644 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.388160 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbbacde7-db93-44ed-801c-be230c2f1594-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:55 crc kubenswrapper[4644]: I0204 08:59:55.388171 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prfzn\" (UniqueName: \"kubernetes.io/projected/dbbacde7-db93-44ed-801c-be230c2f1594-kube-api-access-prfzn\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:55 crc kubenswrapper[4644]: E0204 08:59:55.412687 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 04 08:59:55 crc kubenswrapper[4644]: E0204 08:59:55.412869 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf4h576h649h7h5bdh5ch95h554h575hf6h54dhfh566h64dh556h5bbhf5hfdh645h597h684h59fh5bbh6fh548h7h94h5bh545hc5h57bh674q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dh74n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-596fcd84d5-qzdf7_openstack(71bf7beb-de53-41c4-a39a-649731849784): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:59:55 crc kubenswrapper[4644]: E0204 08:59:55.423852 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-596fcd84d5-qzdf7" podUID="71bf7beb-de53-41c4-a39a-649731849784" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.013701 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xkk6h" event={"ID":"dbbacde7-db93-44ed-801c-be230c2f1594","Type":"ContainerDied","Data":"36135450fa4fcbbcc372897e8711a2488887c0bc5f431e83c2cb7537faf97dfb"} Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.014060 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36135450fa4fcbbcc372897e8711a2488887c0bc5f431e83c2cb7537faf97dfb" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.013711 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xkk6h" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.210019 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xkk6h"] Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.225808 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xkk6h"] Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.333183 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-47ns9"] Feb 04 08:59:56 crc kubenswrapper[4644]: E0204 08:59:56.333664 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbbacde7-db93-44ed-801c-be230c2f1594" containerName="keystone-bootstrap" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.333682 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbbacde7-db93-44ed-801c-be230c2f1594" containerName="keystone-bootstrap" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.333873 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbbacde7-db93-44ed-801c-be230c2f1594" containerName="keystone-bootstrap" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.334492 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.339813 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.340214 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.340505 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4x467" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.340535 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.340666 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.354034 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-47ns9"] Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.513734 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-config-data\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.513886 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-scripts\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.513936 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-credential-keys\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.513959 4644 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-combined-ca-bundle\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.514135 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sft69\" (UniqueName: \"kubernetes.io/projected/d30388d1-fcaa-4680-ba0e-7cf6b071c356-kube-api-access-sft69\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.514248 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-fernet-keys\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.615512 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-scripts\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.615586 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-credential-keys\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.615617 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-combined-ca-bundle\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.615660 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sft69\" (UniqueName: \"kubernetes.io/projected/d30388d1-fcaa-4680-ba0e-7cf6b071c356-kube-api-access-sft69\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.615701 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-fernet-keys\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.615743 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-config-data\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.621356 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-combined-ca-bundle\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.621931 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-fernet-keys\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.622552 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-config-data\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.624830 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-credential-keys\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.636184 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-scripts\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.637060 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sft69\" (UniqueName: \"kubernetes.io/projected/d30388d1-fcaa-4680-ba0e-7cf6b071c356-kube-api-access-sft69\") pod \"keystone-bootstrap-47ns9\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.667482 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-47ns9" Feb 04 08:59:56 crc kubenswrapper[4644]: I0204 08:59:56.672734 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbbacde7-db93-44ed-801c-be230c2f1594" path="/var/lib/kubelet/pods/dbbacde7-db93-44ed-801c-be230c2f1594/volumes" Feb 04 08:59:56 crc kubenswrapper[4644]: E0204 08:59:56.994357 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 04 08:59:56 crc kubenswrapper[4644]: E0204 08:59:56.994533 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8n62d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-wlwtb_openstack(f6df95b1-d952-4b17-bb90-2a32fecb0a5b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:59:56 crc kubenswrapper[4644]: E0204 08:59:56.995769 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/cinder-db-sync-wlwtb" podUID="f6df95b1-d952-4b17-bb90-2a32fecb0a5b" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.031385 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54895dbb5-ssrzj" event={"ID":"41b7c593-91c6-4d69-af50-0e067e3fbea8","Type":"ContainerDied","Data":"db44f1d80bcdaa428c8874259ec2643ec1443c2d05bc6af35229b845df55b73b"} Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.031428 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db44f1d80bcdaa428c8874259ec2643ec1443c2d05bc6af35229b845df55b73b" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.032364 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-847d8d4b6f-jkqkx" event={"ID":"630413ac-d24f-412b-a8da-ec0d7f64ed1e","Type":"ContainerDied","Data":"87dd626b7c80e3f5ea0eebdfe0521606636d3ffb70659afb0699bd9698965e6e"} Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.032422 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87dd626b7c80e3f5ea0eebdfe0521606636d3ffb70659afb0699bd9698965e6e" Feb 04 08:59:57 crc kubenswrapper[4644]: E0204 08:59:57.036234 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-wlwtb" podUID="f6df95b1-d952-4b17-bb90-2a32fecb0a5b" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.075616 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54895dbb5-ssrzj" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.083404 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-847d8d4b6f-jkqkx" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.230949 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41b7c593-91c6-4d69-af50-0e067e3fbea8-logs\") pod \"41b7c593-91c6-4d69-af50-0e067e3fbea8\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.231270 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/630413ac-d24f-412b-a8da-ec0d7f64ed1e-config-data\") pod \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.231301 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41b7c593-91c6-4d69-af50-0e067e3fbea8-logs" (OuterVolumeSpecName: "logs") pod "41b7c593-91c6-4d69-af50-0e067e3fbea8" (UID: "41b7c593-91c6-4d69-af50-0e067e3fbea8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.231347 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9xvv\" (UniqueName: \"kubernetes.io/projected/630413ac-d24f-412b-a8da-ec0d7f64ed1e-kube-api-access-c9xvv\") pod \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.231409 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41b7c593-91c6-4d69-af50-0e067e3fbea8-horizon-secret-key\") pod \"41b7c593-91c6-4d69-af50-0e067e3fbea8\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.231996 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41b7c593-91c6-4d69-af50-0e067e3fbea8-scripts\") pod \"41b7c593-91c6-4d69-af50-0e067e3fbea8\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.232021 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/630413ac-d24f-412b-a8da-ec0d7f64ed1e-scripts\") pod \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.232055 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/630413ac-d24f-412b-a8da-ec0d7f64ed1e-horizon-secret-key\") pod \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.232086 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41b7c593-91c6-4d69-af50-0e067e3fbea8-config-data\") pod \"41b7c593-91c6-4d69-af50-0e067e3fbea8\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.232147 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4g8v\" (UniqueName: \"kubernetes.io/projected/41b7c593-91c6-4d69-af50-0e067e3fbea8-kube-api-access-f4g8v\") pod \"41b7c593-91c6-4d69-af50-0e067e3fbea8\" (UID: \"41b7c593-91c6-4d69-af50-0e067e3fbea8\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.232193 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/630413ac-d24f-412b-a8da-ec0d7f64ed1e-logs\") pod \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\" (UID: \"630413ac-d24f-412b-a8da-ec0d7f64ed1e\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.232205 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/630413ac-d24f-412b-a8da-ec0d7f64ed1e-config-data" (OuterVolumeSpecName: "config-data") pod "630413ac-d24f-412b-a8da-ec0d7f64ed1e" (UID: "630413ac-d24f-412b-a8da-ec0d7f64ed1e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.232679 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41b7c593-91c6-4d69-af50-0e067e3fbea8-logs\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.232692 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/630413ac-d24f-412b-a8da-ec0d7f64ed1e-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.233449 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/630413ac-d24f-412b-a8da-ec0d7f64ed1e-scripts" (OuterVolumeSpecName: "scripts") pod "630413ac-d24f-412b-a8da-ec0d7f64ed1e" (UID: "630413ac-d24f-412b-a8da-ec0d7f64ed1e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.234004 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/630413ac-d24f-412b-a8da-ec0d7f64ed1e-logs" (OuterVolumeSpecName: "logs") pod "630413ac-d24f-412b-a8da-ec0d7f64ed1e" (UID: "630413ac-d24f-412b-a8da-ec0d7f64ed1e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.234615 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41b7c593-91c6-4d69-af50-0e067e3fbea8-config-data" (OuterVolumeSpecName: "config-data") pod "41b7c593-91c6-4d69-af50-0e067e3fbea8" (UID: "41b7c593-91c6-4d69-af50-0e067e3fbea8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.235299 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41b7c593-91c6-4d69-af50-0e067e3fbea8-scripts" (OuterVolumeSpecName: "scripts") pod "41b7c593-91c6-4d69-af50-0e067e3fbea8" (UID: "41b7c593-91c6-4d69-af50-0e067e3fbea8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.238280 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630413ac-d24f-412b-a8da-ec0d7f64ed1e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "630413ac-d24f-412b-a8da-ec0d7f64ed1e" (UID: "630413ac-d24f-412b-a8da-ec0d7f64ed1e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.238267 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b7c593-91c6-4d69-af50-0e067e3fbea8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "41b7c593-91c6-4d69-af50-0e067e3fbea8" (UID: "41b7c593-91c6-4d69-af50-0e067e3fbea8"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.241705 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41b7c593-91c6-4d69-af50-0e067e3fbea8-kube-api-access-f4g8v" (OuterVolumeSpecName: "kube-api-access-f4g8v") pod "41b7c593-91c6-4d69-af50-0e067e3fbea8" (UID: "41b7c593-91c6-4d69-af50-0e067e3fbea8"). 
InnerVolumeSpecName "kube-api-access-f4g8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.254055 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/630413ac-d24f-412b-a8da-ec0d7f64ed1e-kube-api-access-c9xvv" (OuterVolumeSpecName: "kube-api-access-c9xvv") pod "630413ac-d24f-412b-a8da-ec0d7f64ed1e" (UID: "630413ac-d24f-412b-a8da-ec0d7f64ed1e"). InnerVolumeSpecName "kube-api-access-c9xvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.334157 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9xvv\" (UniqueName: \"kubernetes.io/projected/630413ac-d24f-412b-a8da-ec0d7f64ed1e-kube-api-access-c9xvv\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.334193 4644 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41b7c593-91c6-4d69-af50-0e067e3fbea8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.334203 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41b7c593-91c6-4d69-af50-0e067e3fbea8-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.334211 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/630413ac-d24f-412b-a8da-ec0d7f64ed1e-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.334219 4644 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/630413ac-d24f-412b-a8da-ec0d7f64ed1e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.334228 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41b7c593-91c6-4d69-af50-0e067e3fbea8-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.334238 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4g8v\" (UniqueName: \"kubernetes.io/projected/41b7c593-91c6-4d69-af50-0e067e3fbea8-kube-api-access-f4g8v\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.334246 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/630413ac-d24f-412b-a8da-ec0d7f64ed1e-logs\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: E0204 08:59:57.587438 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 04 08:59:57 crc kubenswrapper[4644]: E0204 08:59:57.587576 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5lrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-6tq9r_openstack(4623241a-c4dc-4646-9b03-aa89b84ca4b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 08:59:57 crc kubenswrapper[4644]: E0204 08:59:57.589619 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-6tq9r" podUID="4623241a-c4dc-4646-9b03-aa89b84ca4b1" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.620730 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.630296 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.741605 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-dns-svc\") pod \"42473ac1-38e6-4651-9f0b-13df0950127d\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.741691 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-config\") pod \"42473ac1-38e6-4651-9f0b-13df0950127d\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.741748 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7vr6\" (UniqueName: \"kubernetes.io/projected/42473ac1-38e6-4651-9f0b-13df0950127d-kube-api-access-v7vr6\") pod \"42473ac1-38e6-4651-9f0b-13df0950127d\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.741768 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71bf7beb-de53-41c4-a39a-649731849784-scripts\") pod \"71bf7beb-de53-41c4-a39a-649731849784\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.741823 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh74n\" (UniqueName: \"kubernetes.io/projected/71bf7beb-de53-41c4-a39a-649731849784-kube-api-access-dh74n\") pod \"71bf7beb-de53-41c4-a39a-649731849784\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.741869 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-ovsdbserver-sb\") pod \"42473ac1-38e6-4651-9f0b-13df0950127d\" (UID: \"42473ac1-38e6-4651-9f0b-13df0950127d\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.741952 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/71bf7beb-de53-41c4-a39a-649731849784-horizon-secret-key\") pod \"71bf7beb-de53-41c4-a39a-649731849784\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.741984 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71bf7beb-de53-41c4-a39a-649731849784-config-data\") pod \"71bf7beb-de53-41c4-a39a-649731849784\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.742038 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71bf7beb-de53-41c4-a39a-649731849784-logs\") pod \"71bf7beb-de53-41c4-a39a-649731849784\" (UID: \"71bf7beb-de53-41c4-a39a-649731849784\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.742075 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-ovsdbserver-nb\") pod \"42473ac1-38e6-4651-9f0b-13df0950127d\" (UID: 
\"42473ac1-38e6-4651-9f0b-13df0950127d\") " Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.743955 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71bf7beb-de53-41c4-a39a-649731849784-logs" (OuterVolumeSpecName: "logs") pod "71bf7beb-de53-41c4-a39a-649731849784" (UID: "71bf7beb-de53-41c4-a39a-649731849784"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.744914 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71bf7beb-de53-41c4-a39a-649731849784-config-data" (OuterVolumeSpecName: "config-data") pod "71bf7beb-de53-41c4-a39a-649731849784" (UID: "71bf7beb-de53-41c4-a39a-649731849784"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.745169 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71bf7beb-de53-41c4-a39a-649731849784-scripts" (OuterVolumeSpecName: "scripts") pod "71bf7beb-de53-41c4-a39a-649731849784" (UID: "71bf7beb-de53-41c4-a39a-649731849784"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.783011 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71bf7beb-de53-41c4-a39a-649731849784-kube-api-access-dh74n" (OuterVolumeSpecName: "kube-api-access-dh74n") pod "71bf7beb-de53-41c4-a39a-649731849784" (UID: "71bf7beb-de53-41c4-a39a-649731849784"). InnerVolumeSpecName "kube-api-access-dh74n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.783164 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71bf7beb-de53-41c4-a39a-649731849784-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "71bf7beb-de53-41c4-a39a-649731849784" (UID: "71bf7beb-de53-41c4-a39a-649731849784"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.788862 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42473ac1-38e6-4651-9f0b-13df0950127d-kube-api-access-v7vr6" (OuterVolumeSpecName: "kube-api-access-v7vr6") pod "42473ac1-38e6-4651-9f0b-13df0950127d" (UID: "42473ac1-38e6-4651-9f0b-13df0950127d"). InnerVolumeSpecName "kube-api-access-v7vr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.845886 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7vr6\" (UniqueName: \"kubernetes.io/projected/42473ac1-38e6-4651-9f0b-13df0950127d-kube-api-access-v7vr6\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.845922 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71bf7beb-de53-41c4-a39a-649731849784-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.845935 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh74n\" (UniqueName: \"kubernetes.io/projected/71bf7beb-de53-41c4-a39a-649731849784-kube-api-access-dh74n\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.845946 4644 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/71bf7beb-de53-41c4-a39a-649731849784-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.845958 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71bf7beb-de53-41c4-a39a-649731849784-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.845971 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71bf7beb-de53-41c4-a39a-649731849784-logs\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.863600 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42473ac1-38e6-4651-9f0b-13df0950127d" (UID: "42473ac1-38e6-4651-9f0b-13df0950127d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.909843 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "42473ac1-38e6-4651-9f0b-13df0950127d" (UID: "42473ac1-38e6-4651-9f0b-13df0950127d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.912447 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-config" (OuterVolumeSpecName: "config") pod "42473ac1-38e6-4651-9f0b-13df0950127d" (UID: "42473ac1-38e6-4651-9f0b-13df0950127d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.916197 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42473ac1-38e6-4651-9f0b-13df0950127d" (UID: "42473ac1-38e6-4651-9f0b-13df0950127d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.951234 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.956970 4644 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.956999 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-config\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:57 crc kubenswrapper[4644]: I0204 08:59:57.959622 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42473ac1-38e6-4651-9f0b-13df0950127d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.054077 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" event={"ID":"42473ac1-38e6-4651-9f0b-13df0950127d","Type":"ContainerDied","Data":"f679674152936802fe47f9757721898a5da7d97927c9d4e277ad7ba7c74a66c8"} Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.054097 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.054128 4644 scope.go:117] "RemoveContainer" containerID="6279bc2710aafb11c8399cc2671efe9db0b87443ab72be2381b53e8052a336ba" Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.055821 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54895dbb5-ssrzj" Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.056189 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-596fcd84d5-qzdf7" event={"ID":"71bf7beb-de53-41c4-a39a-649731849784","Type":"ContainerDied","Data":"dcdaef6aa094cded089c8f6690cb075dc8bba8f99bcfa2e84e8b463e99b38f96"} Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.056263 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-596fcd84d5-qzdf7" Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.068898 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-847d8d4b6f-jkqkx" Feb 04 08:59:58 crc kubenswrapper[4644]: E0204 08:59:58.070907 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-6tq9r" podUID="4623241a-c4dc-4646-9b03-aa89b84ca4b1" Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.156982 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54895dbb5-ssrzj"] Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.164177 4644 scope.go:117] "RemoveContainer" containerID="44f8f8943055c7a7205644446d00e075a04a012288f773bc8ca383cef0d72a41" Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.169991 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-54895dbb5-ssrzj"] Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.177278 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pdrwn"] Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.188621 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pdrwn"] Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.214827 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-847d8d4b6f-jkqkx"] Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.220714 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-847d8d4b6f-jkqkx"] Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.234096 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-596fcd84d5-qzdf7"] Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.249723 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-596fcd84d5-qzdf7"] Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.367204 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-658bfcb544-88gj4"] Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.389478 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fb9db66f6-v84nx"] Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.443124 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 08:59:58 crc kubenswrapper[4644]: W0204 08:59:58.459684 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod676db25f_e0ad_48cc_af2c_88029d6eb80d.slice/crio-e963aa5585b1cfd7e656b8f2be97e31db035360ff6956981cc30cfabdfb586c5 WatchSource:0}: Error finding container e963aa5585b1cfd7e656b8f2be97e31db035360ff6956981cc30cfabdfb586c5: Status 404 returned error can't find the container with id e963aa5585b1cfd7e656b8f2be97e31db035360ff6956981cc30cfabdfb586c5 Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.531691 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-s25jz"] Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.658947 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.683889 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41b7c593-91c6-4d69-af50-0e067e3fbea8" path="/var/lib/kubelet/pods/41b7c593-91c6-4d69-af50-0e067e3fbea8/volumes" Feb 04 08:59:58 crc 
kubenswrapper[4644]: I0204 08:59:58.684254 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42473ac1-38e6-4651-9f0b-13df0950127d" path="/var/lib/kubelet/pods/42473ac1-38e6-4651-9f0b-13df0950127d/volumes" Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.684936 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="630413ac-d24f-412b-a8da-ec0d7f64ed1e" path="/var/lib/kubelet/pods/630413ac-d24f-412b-a8da-ec0d7f64ed1e/volumes" Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.685381 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71bf7beb-de53-41c4-a39a-649731849784" path="/var/lib/kubelet/pods/71bf7beb-de53-41c4-a39a-649731849784/volumes" Feb 04 08:59:58 crc kubenswrapper[4644]: I0204 08:59:58.686288 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-47ns9"] Feb 04 08:59:59 crc kubenswrapper[4644]: I0204 08:59:59.070529 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb9db66f6-v84nx" event={"ID":"46ca97c0-c6d7-4547-bb97-1d8b032c6297","Type":"ContainerStarted","Data":"4c860d6835611dfb22c0185348590c08fcab4a1380876d4cb8754ad26f0c10fd"} Feb 04 08:59:59 crc kubenswrapper[4644]: I0204 08:59:59.082506 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e18a22-5f5b-4233-86cb-1c014ab61840","Type":"ContainerStarted","Data":"5884dd8335eb971ba80b017c80a1aac95b7916a46d83af9c1d0da691a592df56"} Feb 04 08:59:59 crc kubenswrapper[4644]: I0204 08:59:59.090658 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-658bfcb544-88gj4" event={"ID":"676db25f-e0ad-48cc-af2c-88029d6eb80d","Type":"ContainerStarted","Data":"e963aa5585b1cfd7e656b8f2be97e31db035360ff6956981cc30cfabdfb586c5"} Feb 04 08:59:59 crc kubenswrapper[4644]: I0204 08:59:59.097083 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"10f83fae-9ad9-4df3-ad50-06e4a83a95a4","Type":"ContainerStarted","Data":"ce72c827f741fcefd1233f9e8d05e264fc99d93efad366494e1e924c8d2a17dd"} Feb 04 08:59:59 crc kubenswrapper[4644]: I0204 08:59:59.100515 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lffcr" event={"ID":"1c536da4-4882-4716-b4ae-4894dcc769e1","Type":"ContainerStarted","Data":"8a2e5fd22e7f97bf61e75155bb89cfc69a8e13418a7202fab8440b576abbd81e"} Feb 04 08:59:59 crc kubenswrapper[4644]: I0204 08:59:59.103587 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-47ns9" event={"ID":"d30388d1-fcaa-4680-ba0e-7cf6b071c356","Type":"ContainerStarted","Data":"2706aa2342f9aea54fa160b95e38c4506698fd9a10b92137e2b2993b35e705fd"} Feb 04 08:59:59 crc kubenswrapper[4644]: I0204 08:59:59.103614 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-47ns9" event={"ID":"d30388d1-fcaa-4680-ba0e-7cf6b071c356","Type":"ContainerStarted","Data":"f17a34c9962dd493eba9b95576e0c02ed3324a948767d4ce9a2e540ef2a8efbc"} Feb 04 08:59:59 crc kubenswrapper[4644]: I0204 08:59:59.106175 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" event={"ID":"5ee070eb-92be-4710-904a-7dab66158ee2","Type":"ContainerStarted","Data":"4096f85f0cc4f281b7b51e58d920fff429a9d74e68cf3e0e7bb70097b926c5ef"} Feb 04 08:59:59 crc kubenswrapper[4644]: I0204 08:59:59.108293 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d1c1ea93-5a2c-4128-9838-60a7ac2c2632","Type":"ContainerStarted","Data":"9902315e301432e81952315f101cbfc5274886a6bf85f2627dfdf6ac0c4eda37"} Feb 04 08:59:59 crc kubenswrapper[4644]: I0204 08:59:59.139955 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-47ns9" podStartSLOduration=3.139927853 podStartE2EDuration="3.139927853s" podCreationTimestamp="2026-02-04 08:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 08:59:59.133123311 +0000 UTC m=+1109.173181066" watchObservedRunningTime="2026-02-04 08:59:59.139927853 +0000 UTC m=+1109.179985608" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.145839 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"10f83fae-9ad9-4df3-ad50-06e4a83a95a4","Type":"ContainerStarted","Data":"f9aec4b2af46439459cc9cac38e106eca6b298773bcca5381e41aa41854d707b"} Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.164838 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6"] Feb 04 09:00:00 crc kubenswrapper[4644]: E0204 09:00:00.165347 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42473ac1-38e6-4651-9f0b-13df0950127d" containerName="init" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.165369 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="42473ac1-38e6-4651-9f0b-13df0950127d" containerName="init" Feb 04 09:00:00 crc kubenswrapper[4644]: E0204 09:00:00.165405 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42473ac1-38e6-4651-9f0b-13df0950127d" containerName="dnsmasq-dns" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.165414 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="42473ac1-38e6-4651-9f0b-13df0950127d" containerName="dnsmasq-dns" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.165631 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="42473ac1-38e6-4651-9f0b-13df0950127d" containerName="dnsmasq-dns" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.166303 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.171926 4644 generic.go:334] "Generic (PLEG): container finished" podID="5ee070eb-92be-4710-904a-7dab66158ee2" containerID="f4636e6da88d5536a87b299e5120657b759c1aff4be05efcc9012fe69207f516" exitCode=0 Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.172040 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" event={"ID":"5ee070eb-92be-4710-904a-7dab66158ee2","Type":"ContainerDied","Data":"f4636e6da88d5536a87b299e5120657b759c1aff4be05efcc9012fe69207f516"} Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.175044 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.175217 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.180027 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1c1ea93-5a2c-4128-9838-60a7ac2c2632","Type":"ContainerStarted","Data":"6b9c3aa4b4cdc5e2396a52037e92aa4db0d44edae6b473ffacf8c51004938b35"} Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.186665 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6"] Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.198384 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb9db66f6-v84nx" event={"ID":"46ca97c0-c6d7-4547-bb97-1d8b032c6297","Type":"ContainerStarted","Data":"1dab15a1be86303be92c3759c78a00e496aa99453479fc6bf25b4717598d3ad4"} Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.202679 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-658bfcb544-88gj4" event={"ID":"676db25f-e0ad-48cc-af2c-88029d6eb80d","Type":"ContainerStarted","Data":"95510a00131773c32fd94f8ae9454628b093e68aa915de83d3362250d0551602"} Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.202713 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-658bfcb544-88gj4" event={"ID":"676db25f-e0ad-48cc-af2c-88029d6eb80d","Type":"ContainerStarted","Data":"272ad5a7273bf3e8dce58fec2ccb47f692010b9e7be2a82ea9762aff391f668a"} Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.240490 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-658bfcb544-88gj4" podStartSLOduration=43.347161806 podStartE2EDuration="44.24046945s" podCreationTimestamp="2026-02-04 08:59:16 +0000 UTC" firstStartedPulling="2026-02-04 08:59:58.465218764 +0000 UTC m=+1108.505276519" lastFinishedPulling="2026-02-04 08:59:59.358526408 +0000 UTC m=+1109.398584163" observedRunningTime="2026-02-04 09:00:00.232743824 +0000 UTC m=+1110.272801599" watchObservedRunningTime="2026-02-04 09:00:00.24046945 +0000 UTC m=+1110.280527205" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.310850 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98c2w\" (UniqueName: \"kubernetes.io/projected/774fde1c-3407-4779-8e1e-e884b86ea91e-kube-api-access-98c2w\") pod \"collect-profiles-29503260-wpjf6\" (UID: \"774fde1c-3407-4779-8e1e-e884b86ea91e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.310941 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/774fde1c-3407-4779-8e1e-e884b86ea91e-secret-volume\") pod \"collect-profiles-29503260-wpjf6\" (UID: \"774fde1c-3407-4779-8e1e-e884b86ea91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.311078 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/774fde1c-3407-4779-8e1e-e884b86ea91e-config-volume\") pod \"collect-profiles-29503260-wpjf6\" (UID: \"774fde1c-3407-4779-8e1e-e884b86ea91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.412745 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/774fde1c-3407-4779-8e1e-e884b86ea91e-config-volume\") pod \"collect-profiles-29503260-wpjf6\" (UID: \"774fde1c-3407-4779-8e1e-e884b86ea91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.413189 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98c2w\" (UniqueName: \"kubernetes.io/projected/774fde1c-3407-4779-8e1e-e884b86ea91e-kube-api-access-98c2w\") pod \"collect-profiles-29503260-wpjf6\" (UID: \"774fde1c-3407-4779-8e1e-e884b86ea91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.413250 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/774fde1c-3407-4779-8e1e-e884b86ea91e-secret-volume\") pod \"collect-profiles-29503260-wpjf6\" (UID: \"774fde1c-3407-4779-8e1e-e884b86ea91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.413948 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/774fde1c-3407-4779-8e1e-e884b86ea91e-config-volume\") pod \"collect-profiles-29503260-wpjf6\" (UID: \"774fde1c-3407-4779-8e1e-e884b86ea91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.418685 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/774fde1c-3407-4779-8e1e-e884b86ea91e-secret-volume\") pod \"collect-profiles-29503260-wpjf6\" (UID: \"774fde1c-3407-4779-8e1e-e884b86ea91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.432667 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98c2w\" (UniqueName: \"kubernetes.io/projected/774fde1c-3407-4779-8e1e-e884b86ea91e-kube-api-access-98c2w\") pod \"collect-profiles-29503260-wpjf6\" (UID: \"774fde1c-3407-4779-8e1e-e884b86ea91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.524798 4644 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.703122 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5fb9db66f6-v84nx" podStartSLOduration=43.655097908 podStartE2EDuration="44.703101247s" podCreationTimestamp="2026-02-04 08:59:16 +0000 UTC" firstStartedPulling="2026-02-04 08:59:58.46469259 +0000 UTC m=+1108.504750345" lastFinishedPulling="2026-02-04 08:59:59.512695929 +0000 UTC m=+1109.552753684" observedRunningTime="2026-02-04 09:00:00.262007012 +0000 UTC m=+1110.302064777" watchObservedRunningTime="2026-02-04 09:00:00.703101247 +0000 UTC m=+1110.743159002" Feb 04 09:00:00 crc kubenswrapper[4644]: I0204 09:00:00.942898 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-pdrwn" podUID="42473ac1-38e6-4651-9f0b-13df0950127d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Feb 04 09:00:01 crc kubenswrapper[4644]: I0204 09:00:01.016075 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6"] Feb 04 09:00:01 crc kubenswrapper[4644]: W0204 09:00:01.020096 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod774fde1c_3407_4779_8e1e_e884b86ea91e.slice/crio-a2ee95a6ccda81533e01ae1309a7f18655286f3799906e62a3d8dcd740126a50 WatchSource:0}: Error finding container a2ee95a6ccda81533e01ae1309a7f18655286f3799906e62a3d8dcd740126a50: Status 404 returned error can't find the container with id a2ee95a6ccda81533e01ae1309a7f18655286f3799906e62a3d8dcd740126a50 Feb 04 09:00:01 crc kubenswrapper[4644]: I0204 09:00:01.209435 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e18a22-5f5b-4233-86cb-1c014ab61840","Type":"ContainerStarted","Data":"fa849767dc0fad0ce5a803017dabf754eac706e29f9879429800565872437189"} Feb 04 09:00:01 crc kubenswrapper[4644]: I0204 09:00:01.211913 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"10f83fae-9ad9-4df3-ad50-06e4a83a95a4","Type":"ContainerStarted","Data":"3e7cb648ed4def50b2be0165f73cfc8b09b1e67338311ac206fe96a899ad796d"} Feb 04 09:00:01 crc kubenswrapper[4644]: I0204 09:00:01.212011 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="10f83fae-9ad9-4df3-ad50-06e4a83a95a4" containerName="glance-log" containerID="cri-o://f9aec4b2af46439459cc9cac38e106eca6b298773bcca5381e41aa41854d707b" gracePeriod=30 Feb 04 09:00:01 crc kubenswrapper[4644]: I0204 09:00:01.212034 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="10f83fae-9ad9-4df3-ad50-06e4a83a95a4" containerName="glance-httpd" containerID="cri-o://3e7cb648ed4def50b2be0165f73cfc8b09b1e67338311ac206fe96a899ad796d" gracePeriod=30 Feb 04 09:00:01 crc kubenswrapper[4644]: I0204 09:00:01.214039 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" event={"ID":"5ee070eb-92be-4710-904a-7dab66158ee2","Type":"ContainerStarted","Data":"f3ff0f7bc6eb28f82a8f0abd05ef4e2f0ff91d790ec257e8c85ddd520a423d74"} Feb 04 09:00:01 crc kubenswrapper[4644]: I0204 09:00:01.214937 4644 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 09:00:01 crc kubenswrapper[4644]: I0204 09:00:01.218489 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1c1ea93-5a2c-4128-9838-60a7ac2c2632","Type":"ContainerStarted","Data":"d453a743793a6267d674e2f0ae269d7704cdbb41e72c03b2a715285e141327c2"} Feb 04 09:00:01 crc kubenswrapper[4644]: I0204 09:00:01.218585 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d1c1ea93-5a2c-4128-9838-60a7ac2c2632" containerName="glance-log" containerID="cri-o://6b9c3aa4b4cdc5e2396a52037e92aa4db0d44edae6b473ffacf8c51004938b35" gracePeriod=30 Feb 04 09:00:01 crc kubenswrapper[4644]: I0204 09:00:01.218612 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d1c1ea93-5a2c-4128-9838-60a7ac2c2632" containerName="glance-httpd" containerID="cri-o://d453a743793a6267d674e2f0ae269d7704cdbb41e72c03b2a715285e141327c2" gracePeriod=30 Feb 04 09:00:01 crc kubenswrapper[4644]: I0204 09:00:01.222411 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" event={"ID":"774fde1c-3407-4779-8e1e-e884b86ea91e","Type":"ContainerStarted","Data":"0995d52beecf8d50e6a354b08deb4243956220224842c108f1648105e4b8e5a5"} Feb 04 09:00:01 crc kubenswrapper[4644]: I0204 09:00:01.222455 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" event={"ID":"774fde1c-3407-4779-8e1e-e884b86ea91e","Type":"ContainerStarted","Data":"a2ee95a6ccda81533e01ae1309a7f18655286f3799906e62a3d8dcd740126a50"} Feb 04 09:00:01 crc kubenswrapper[4644]: I0204 09:00:01.225183 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb9db66f6-v84nx" event={"ID":"46ca97c0-c6d7-4547-bb97-1d8b032c6297","Type":"ContainerStarted","Data":"8902692ba02e033725e2a06b7322db9e6b6ebfef6c1d54196d590bb4f96705ea"} Feb 04 09:00:01 crc kubenswrapper[4644]: I0204 09:00:01.276631 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" podStartSLOduration=21.276610184 podStartE2EDuration="21.276610184s" podCreationTimestamp="2026-02-04 08:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:00:01.272671479 +0000 UTC m=+1111.312729234" watchObservedRunningTime="2026-02-04 09:00:01.276610184 +0000 UTC m=+1111.316667939" Feb 04 09:00:01 crc kubenswrapper[4644]: I0204 09:00:01.276799 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=22.276795259 podStartE2EDuration="22.276795259s" podCreationTimestamp="2026-02-04 08:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:00:01.249454501 +0000 UTC m=+1111.289512256" watchObservedRunningTime="2026-02-04 09:00:01.276795259 +0000 UTC m=+1111.316853014" Feb 04 09:00:02 crc kubenswrapper[4644]: I0204 09:00:02.233706 4644 generic.go:334] "Generic (PLEG): container finished" podID="774fde1c-3407-4779-8e1e-e884b86ea91e" containerID="0995d52beecf8d50e6a354b08deb4243956220224842c108f1648105e4b8e5a5" exitCode=0 Feb 04 09:00:02 crc kubenswrapper[4644]: 
I0204 09:00:02.233926 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" event={"ID":"774fde1c-3407-4779-8e1e-e884b86ea91e","Type":"ContainerDied","Data":"0995d52beecf8d50e6a354b08deb4243956220224842c108f1648105e4b8e5a5"} Feb 04 09:00:02 crc kubenswrapper[4644]: I0204 09:00:02.237289 4644 generic.go:334] "Generic (PLEG): container finished" podID="10f83fae-9ad9-4df3-ad50-06e4a83a95a4" containerID="3e7cb648ed4def50b2be0165f73cfc8b09b1e67338311ac206fe96a899ad796d" exitCode=0 Feb 04 09:00:02 crc kubenswrapper[4644]: I0204 09:00:02.237316 4644 generic.go:334] "Generic (PLEG): container finished" podID="10f83fae-9ad9-4df3-ad50-06e4a83a95a4" containerID="f9aec4b2af46439459cc9cac38e106eca6b298773bcca5381e41aa41854d707b" exitCode=143 Feb 04 09:00:02 crc kubenswrapper[4644]: I0204 09:00:02.237319 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"10f83fae-9ad9-4df3-ad50-06e4a83a95a4","Type":"ContainerDied","Data":"3e7cb648ed4def50b2be0165f73cfc8b09b1e67338311ac206fe96a899ad796d"} Feb 04 09:00:02 crc kubenswrapper[4644]: I0204 09:00:02.237369 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"10f83fae-9ad9-4df3-ad50-06e4a83a95a4","Type":"ContainerDied","Data":"f9aec4b2af46439459cc9cac38e106eca6b298773bcca5381e41aa41854d707b"} Feb 04 09:00:02 crc kubenswrapper[4644]: I0204 09:00:02.239927 4644 generic.go:334] "Generic (PLEG): container finished" podID="d1c1ea93-5a2c-4128-9838-60a7ac2c2632" containerID="6b9c3aa4b4cdc5e2396a52037e92aa4db0d44edae6b473ffacf8c51004938b35" exitCode=143 Feb 04 09:00:02 crc kubenswrapper[4644]: I0204 09:00:02.240123 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1c1ea93-5a2c-4128-9838-60a7ac2c2632","Type":"ContainerDied","Data":"6b9c3aa4b4cdc5e2396a52037e92aa4db0d44edae6b473ffacf8c51004938b35"} Feb 04 09:00:02 crc kubenswrapper[4644]: I0204 09:00:02.268742 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=22.268722867 podStartE2EDuration="22.268722867s" podCreationTimestamp="2026-02-04 08:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:00:01.30615504 +0000 UTC m=+1111.346212815" watchObservedRunningTime="2026-02-04 09:00:02.268722867 +0000 UTC m=+1112.308780622" Feb 04 09:00:02 crc kubenswrapper[4644]: I0204 09:00:02.937571 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.071433 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-httpd-run\") pod \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.071498 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.071527 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-combined-ca-bundle\") pod \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.071591 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-logs\") pod \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.071708 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-scripts\") pod \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.071746 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-config-data\") pod \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.071818 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5rtj\" (UniqueName: \"kubernetes.io/projected/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-kube-api-access-t5rtj\") pod \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\" (UID: \"d1c1ea93-5a2c-4128-9838-60a7ac2c2632\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.074391 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-logs" (OuterVolumeSpecName: "logs") pod "d1c1ea93-5a2c-4128-9838-60a7ac2c2632" (UID: "d1c1ea93-5a2c-4128-9838-60a7ac2c2632"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.075048 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d1c1ea93-5a2c-4128-9838-60a7ac2c2632" (UID: "d1c1ea93-5a2c-4128-9838-60a7ac2c2632"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.090566 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-kube-api-access-t5rtj" (OuterVolumeSpecName: "kube-api-access-t5rtj") pod "d1c1ea93-5a2c-4128-9838-60a7ac2c2632" (UID: "d1c1ea93-5a2c-4128-9838-60a7ac2c2632"). InnerVolumeSpecName "kube-api-access-t5rtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.101524 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "d1c1ea93-5a2c-4128-9838-60a7ac2c2632" (UID: "d1c1ea93-5a2c-4128-9838-60a7ac2c2632"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.104841 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-scripts" (OuterVolumeSpecName: "scripts") pod "d1c1ea93-5a2c-4128-9838-60a7ac2c2632" (UID: "d1c1ea93-5a2c-4128-9838-60a7ac2c2632"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.163597 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-config-data" (OuterVolumeSpecName: "config-data") pod "d1c1ea93-5a2c-4128-9838-60a7ac2c2632" (UID: "d1c1ea93-5a2c-4128-9838-60a7ac2c2632"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.176116 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.176167 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.176176 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5rtj\" (UniqueName: \"kubernetes.io/projected/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-kube-api-access-t5rtj\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.176186 4644 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.176238 4644 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.176252 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-logs\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.178494 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-combined-ca-bundle" 
(OuterVolumeSpecName: "combined-ca-bundle") pod "d1c1ea93-5a2c-4128-9838-60a7ac2c2632" (UID: "d1c1ea93-5a2c-4128-9838-60a7ac2c2632"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.204908 4644 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.250900 4644 generic.go:334] "Generic (PLEG): container finished" podID="d1c1ea93-5a2c-4128-9838-60a7ac2c2632" containerID="d453a743793a6267d674e2f0ae269d7704cdbb41e72c03b2a715285e141327c2" exitCode=0 Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.250961 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.250976 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1c1ea93-5a2c-4128-9838-60a7ac2c2632","Type":"ContainerDied","Data":"d453a743793a6267d674e2f0ae269d7704cdbb41e72c03b2a715285e141327c2"} Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.251031 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1c1ea93-5a2c-4128-9838-60a7ac2c2632","Type":"ContainerDied","Data":"9902315e301432e81952315f101cbfc5274886a6bf85f2627dfdf6ac0c4eda37"} Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.251050 4644 scope.go:117] "RemoveContainer" containerID="d453a743793a6267d674e2f0ae269d7704cdbb41e72c03b2a715285e141327c2" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.277728 4644 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.277758 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c1ea93-5a2c-4128-9838-60a7ac2c2632-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.288832 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.348014 4644 scope.go:117] "RemoveContainer" containerID="6b9c3aa4b4cdc5e2396a52037e92aa4db0d44edae6b473ffacf8c51004938b35" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.363685 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.385712 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.386180 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.386221 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-logs\") pod \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.386316 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-config-data\") pod \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.386403 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86vz4\" (UniqueName: \"kubernetes.io/projected/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-kube-api-access-86vz4\") pod \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.386429 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-httpd-run\") pod \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.386462 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-combined-ca-bundle\") pod \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.386556 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-scripts\") pod \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\" (UID: \"10f83fae-9ad9-4df3-ad50-06e4a83a95a4\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.389303 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-logs" (OuterVolumeSpecName: "logs") pod "10f83fae-9ad9-4df3-ad50-06e4a83a95a4" (UID: "10f83fae-9ad9-4df3-ad50-06e4a83a95a4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.390360 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "10f83fae-9ad9-4df3-ad50-06e4a83a95a4" (UID: "10f83fae-9ad9-4df3-ad50-06e4a83a95a4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.392711 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "10f83fae-9ad9-4df3-ad50-06e4a83a95a4" (UID: "10f83fae-9ad9-4df3-ad50-06e4a83a95a4"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.392869 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-kube-api-access-86vz4" (OuterVolumeSpecName: "kube-api-access-86vz4") pod "10f83fae-9ad9-4df3-ad50-06e4a83a95a4" (UID: "10f83fae-9ad9-4df3-ad50-06e4a83a95a4"). InnerVolumeSpecName "kube-api-access-86vz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.392885 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-scripts" (OuterVolumeSpecName: "scripts") pod "10f83fae-9ad9-4df3-ad50-06e4a83a95a4" (UID: "10f83fae-9ad9-4df3-ad50-06e4a83a95a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.427584 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10f83fae-9ad9-4df3-ad50-06e4a83a95a4" (UID: "10f83fae-9ad9-4df3-ad50-06e4a83a95a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.427697 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 09:00:03 crc kubenswrapper[4644]: E0204 09:00:03.428992 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f83fae-9ad9-4df3-ad50-06e4a83a95a4" containerName="glance-httpd" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.429013 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f83fae-9ad9-4df3-ad50-06e4a83a95a4" containerName="glance-httpd" Feb 04 09:00:03 crc kubenswrapper[4644]: E0204 09:00:03.429045 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c1ea93-5a2c-4128-9838-60a7ac2c2632" containerName="glance-httpd" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.429052 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c1ea93-5a2c-4128-9838-60a7ac2c2632" containerName="glance-httpd" Feb 04 09:00:03 crc kubenswrapper[4644]: E0204 09:00:03.429058 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c1ea93-5a2c-4128-9838-60a7ac2c2632" containerName="glance-log" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.429064 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c1ea93-5a2c-4128-9838-60a7ac2c2632" containerName="glance-log" Feb 04 09:00:03 crc kubenswrapper[4644]: E0204 09:00:03.429078 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f83fae-9ad9-4df3-ad50-06e4a83a95a4" containerName="glance-log" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.429084 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f83fae-9ad9-4df3-ad50-06e4a83a95a4" containerName="glance-log" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.429702 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f83fae-9ad9-4df3-ad50-06e4a83a95a4" containerName="glance-log" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.429788 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f83fae-9ad9-4df3-ad50-06e4a83a95a4" containerName="glance-httpd" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.429836 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1c1ea93-5a2c-4128-9838-60a7ac2c2632" containerName="glance-log" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.429851 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1c1ea93-5a2c-4128-9838-60a7ac2c2632" containerName="glance-httpd" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.431644 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.434151 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.434293 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.434483 4644 scope.go:117] "RemoveContainer" containerID="d453a743793a6267d674e2f0ae269d7704cdbb41e72c03b2a715285e141327c2" Feb 04 09:00:03 crc kubenswrapper[4644]: E0204 09:00:03.464686 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d453a743793a6267d674e2f0ae269d7704cdbb41e72c03b2a715285e141327c2\": container with ID starting with d453a743793a6267d674e2f0ae269d7704cdbb41e72c03b2a715285e141327c2 not found: ID does not exist" containerID="d453a743793a6267d674e2f0ae269d7704cdbb41e72c03b2a715285e141327c2" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.464727 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d453a743793a6267d674e2f0ae269d7704cdbb41e72c03b2a715285e141327c2"} err="failed to get container status \"d453a743793a6267d674e2f0ae269d7704cdbb41e72c03b2a715285e141327c2\": rpc error: code = NotFound desc = could not find container \"d453a743793a6267d674e2f0ae269d7704cdbb41e72c03b2a715285e141327c2\": container with ID starting with d453a743793a6267d674e2f0ae269d7704cdbb41e72c03b2a715285e141327c2 not found: ID does not exist" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.464754 4644 scope.go:117] "RemoveContainer" containerID="6b9c3aa4b4cdc5e2396a52037e92aa4db0d44edae6b473ffacf8c51004938b35" Feb 04 09:00:03 crc kubenswrapper[4644]: E0204 09:00:03.465374 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b9c3aa4b4cdc5e2396a52037e92aa4db0d44edae6b473ffacf8c51004938b35\": container with ID starting with 6b9c3aa4b4cdc5e2396a52037e92aa4db0d44edae6b473ffacf8c51004938b35 not found: ID does not exist" containerID="6b9c3aa4b4cdc5e2396a52037e92aa4db0d44edae6b473ffacf8c51004938b35" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.465396 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b9c3aa4b4cdc5e2396a52037e92aa4db0d44edae6b473ffacf8c51004938b35"} err="failed to get container status \"6b9c3aa4b4cdc5e2396a52037e92aa4db0d44edae6b473ffacf8c51004938b35\": rpc error: code = NotFound desc = could not find container \"6b9c3aa4b4cdc5e2396a52037e92aa4db0d44edae6b473ffacf8c51004938b35\": container with ID starting with 6b9c3aa4b4cdc5e2396a52037e92aa4db0d44edae6b473ffacf8c51004938b35 not found: ID does not exist" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.487103 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.492368 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.492415 4644 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") 
on node \"crc\" " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.492425 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-logs\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.492435 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86vz4\" (UniqueName: \"kubernetes.io/projected/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-kube-api-access-86vz4\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.492444 4644 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.492452 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.510092 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-config-data" (OuterVolumeSpecName: "config-data") pod "10f83fae-9ad9-4df3-ad50-06e4a83a95a4" (UID: "10f83fae-9ad9-4df3-ad50-06e4a83a95a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.522869 4644 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.594222 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.594263 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kfm9\" (UniqueName: \"kubernetes.io/projected/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-kube-api-access-5kfm9\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.594304 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.594348 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.594395 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-logs\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.594428 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.594448 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.594471 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.594517 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f83fae-9ad9-4df3-ad50-06e4a83a95a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.594529 4644 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.626274 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.697900 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/774fde1c-3407-4779-8e1e-e884b86ea91e-config-volume\") pod \"774fde1c-3407-4779-8e1e-e884b86ea91e\" (UID: \"774fde1c-3407-4779-8e1e-e884b86ea91e\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.697962 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98c2w\" (UniqueName: \"kubernetes.io/projected/774fde1c-3407-4779-8e1e-e884b86ea91e-kube-api-access-98c2w\") pod \"774fde1c-3407-4779-8e1e-e884b86ea91e\" (UID: \"774fde1c-3407-4779-8e1e-e884b86ea91e\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.698017 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/774fde1c-3407-4779-8e1e-e884b86ea91e-secret-volume\") pod \"774fde1c-3407-4779-8e1e-e884b86ea91e\" (UID: \"774fde1c-3407-4779-8e1e-e884b86ea91e\") " Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.698251 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-logs\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.698298 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.698318 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.698361 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.698411 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.698432 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kfm9\" (UniqueName: \"kubernetes.io/projected/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-kube-api-access-5kfm9\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.698467 4644 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.698503 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.698738 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-logs\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.698955 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/774fde1c-3407-4779-8e1e-e884b86ea91e-config-volume" (OuterVolumeSpecName: "config-volume") pod "774fde1c-3407-4779-8e1e-e884b86ea91e" (UID: "774fde1c-3407-4779-8e1e-e884b86ea91e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.699080 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.699298 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.709248 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.709693 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.710157 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/774fde1c-3407-4779-8e1e-e884b86ea91e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "774fde1c-3407-4779-8e1e-e884b86ea91e" (UID: "774fde1c-3407-4779-8e1e-e884b86ea91e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.710863 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.711666 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/774fde1c-3407-4779-8e1e-e884b86ea91e-kube-api-access-98c2w" (OuterVolumeSpecName: "kube-api-access-98c2w") pod "774fde1c-3407-4779-8e1e-e884b86ea91e" (UID: "774fde1c-3407-4779-8e1e-e884b86ea91e"). InnerVolumeSpecName "kube-api-access-98c2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.731000 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.731638 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kfm9\" (UniqueName: \"kubernetes.io/projected/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-kube-api-access-5kfm9\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.771753 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") " pod="openstack/glance-default-internal-api-0" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.802868 4644 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/774fde1c-3407-4779-8e1e-e884b86ea91e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.803279 4644 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/774fde1c-3407-4779-8e1e-e884b86ea91e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:03 crc kubenswrapper[4644]: I0204 09:00:03.803381 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98c2w\" (UniqueName: \"kubernetes.io/projected/774fde1c-3407-4779-8e1e-e884b86ea91e-kube-api-access-98c2w\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.063853 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.286794 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"10f83fae-9ad9-4df3-ad50-06e4a83a95a4","Type":"ContainerDied","Data":"ce72c827f741fcefd1233f9e8d05e264fc99d93efad366494e1e924c8d2a17dd"} Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.287049 4644 scope.go:117] "RemoveContainer" containerID="3e7cb648ed4def50b2be0165f73cfc8b09b1e67338311ac206fe96a899ad796d" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.287205 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.354297 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.359644 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" event={"ID":"774fde1c-3407-4779-8e1e-e884b86ea91e","Type":"ContainerDied","Data":"a2ee95a6ccda81533e01ae1309a7f18655286f3799906e62a3d8dcd740126a50"} Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.359682 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2ee95a6ccda81533e01ae1309a7f18655286f3799906e62a3d8dcd740126a50" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.359778 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.385977 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.395170 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 09:00:04 crc kubenswrapper[4644]: E0204 09:00:04.395780 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="774fde1c-3407-4779-8e1e-e884b86ea91e" containerName="collect-profiles" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.395800 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="774fde1c-3407-4779-8e1e-e884b86ea91e" containerName="collect-profiles" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.397048 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="774fde1c-3407-4779-8e1e-e884b86ea91e" containerName="collect-profiles" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.398576 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.404686 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.419909 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.420162 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.461380 4644 scope.go:117] "RemoveContainer" containerID="f9aec4b2af46439459cc9cac38e106eca6b298773bcca5381e41aa41854d707b" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.525943 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1590cd61-9cd4-479e-9ba8-d323890eecc0-logs\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.526005 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1590cd61-9cd4-479e-9ba8-d323890eecc0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.526042 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.526084 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.526116 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-config-data\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.526152 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.526174 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-scripts\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " 
pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.526237 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phgfs\" (UniqueName: \"kubernetes.io/projected/1590cd61-9cd4-479e-9ba8-d323890eecc0-kube-api-access-phgfs\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.542321 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.641581 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phgfs\" (UniqueName: \"kubernetes.io/projected/1590cd61-9cd4-479e-9ba8-d323890eecc0-kube-api-access-phgfs\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.641917 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1590cd61-9cd4-479e-9ba8-d323890eecc0-logs\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.641944 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1590cd61-9cd4-479e-9ba8-d323890eecc0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.641985 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.642040 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.642059 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-config-data\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.642105 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.642131 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-scripts\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.643163 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.643687 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1590cd61-9cd4-479e-9ba8-d323890eecc0-logs\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.643814 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1590cd61-9cd4-479e-9ba8-d323890eecc0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.650608 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-config-data\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.653230 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-scripts\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.655052 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.661033 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phgfs\" (UniqueName: \"kubernetes.io/projected/1590cd61-9cd4-479e-9ba8-d323890eecc0-kube-api-access-phgfs\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.664605 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.688290 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10f83fae-9ad9-4df3-ad50-06e4a83a95a4" path="/var/lib/kubelet/pods/10f83fae-9ad9-4df3-ad50-06e4a83a95a4/volumes" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 
09:00:04.689094 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1c1ea93-5a2c-4128-9838-60a7ac2c2632" path="/var/lib/kubelet/pods/d1c1ea93-5a2c-4128-9838-60a7ac2c2632/volumes" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.700628 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") " pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.741684 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.792837 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 09:00:04 crc kubenswrapper[4644]: I0204 09:00:04.794277 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 09:00:05 crc kubenswrapper[4644]: I0204 09:00:05.302830 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 09:00:05 crc kubenswrapper[4644]: W0204 09:00:05.314200 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1590cd61_9cd4_479e_9ba8_d323890eecc0.slice/crio-317745156bbebf7f7212f2187a6e1940aeb5073832646b199ed801ad9d19ee1a WatchSource:0}: Error finding container 317745156bbebf7f7212f2187a6e1940aeb5073832646b199ed801ad9d19ee1a: Status 404 returned error can't find the container with id 317745156bbebf7f7212f2187a6e1940aeb5073832646b199ed801ad9d19ee1a Feb 04 09:00:05 crc kubenswrapper[4644]: I0204 09:00:05.387978 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f","Type":"ContainerStarted","Data":"b0a601c8952550240f5fe1c6167df25d68c6543a1cbc4347b12d4db6219af241"} Feb 04 09:00:05 crc kubenswrapper[4644]: I0204 09:00:05.388211 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f","Type":"ContainerStarted","Data":"6d619b6dae6b0cdb6da7ec38cee601e7e206204b53204de9e29827c464e8e341"} Feb 04 09:00:05 crc kubenswrapper[4644]: I0204 09:00:05.408390 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1590cd61-9cd4-479e-9ba8-d323890eecc0","Type":"ContainerStarted","Data":"317745156bbebf7f7212f2187a6e1940aeb5073832646b199ed801ad9d19ee1a"} Feb 04 09:00:05 crc kubenswrapper[4644]: I0204 09:00:05.877926 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=< Feb 04 09:00:05 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:00:05 crc kubenswrapper[4644]: > Feb 04 09:00:06 crc kubenswrapper[4644]: I0204 09:00:06.418783 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1590cd61-9cd4-479e-9ba8-d323890eecc0","Type":"ContainerStarted","Data":"f9608a5e50f74d623dd19b8d3c553a47020128c3f555e1ab6c2d0bf05a114380"} Feb 04 09:00:06 crc kubenswrapper[4644]: I0204 
09:00:06.425369 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f","Type":"ContainerStarted","Data":"c0076b247a60824499e73ef429b4bc6df4a0f9f582e714949ebeefbab1dd57db"} Feb 04 09:00:06 crc kubenswrapper[4644]: I0204 09:00:06.460436 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.460414997 podStartE2EDuration="3.460414997s" podCreationTimestamp="2026-02-04 09:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:00:06.444403511 +0000 UTC m=+1116.484461266" watchObservedRunningTime="2026-02-04 09:00:06.460414997 +0000 UTC m=+1116.500472752" Feb 04 09:00:06 crc kubenswrapper[4644]: I0204 09:00:06.613075 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 09:00:06 crc kubenswrapper[4644]: I0204 09:00:06.613467 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 09:00:06 crc kubenswrapper[4644]: E0204 09:00:06.685573 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-nlr7w" podUID="c6677efd-b2e4-45b7-8703-3a189d87723d" Feb 04 09:00:06 crc kubenswrapper[4644]: I0204 09:00:06.724372 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-658bfcb544-88gj4" Feb 04 09:00:06 crc kubenswrapper[4644]: I0204 09:00:06.736764 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-658bfcb544-88gj4" Feb 04 09:00:07 crc kubenswrapper[4644]: I0204 09:00:07.471672 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1590cd61-9cd4-479e-9ba8-d323890eecc0","Type":"ContainerStarted","Data":"c88267f2681a46d2aa4025afe5a03dafedb80238e4041bd520cda47a5fd6a8ba"} Feb 04 09:00:07 crc kubenswrapper[4644]: I0204 09:00:07.474199 4644 generic.go:334] "Generic (PLEG): container finished" podID="d30388d1-fcaa-4680-ba0e-7cf6b071c356" containerID="2706aa2342f9aea54fa160b95e38c4506698fd9a10b92137e2b2993b35e705fd" exitCode=0 Feb 04 09:00:07 crc kubenswrapper[4644]: I0204 09:00:07.474412 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-47ns9" event={"ID":"d30388d1-fcaa-4680-ba0e-7cf6b071c356","Type":"ContainerDied","Data":"2706aa2342f9aea54fa160b95e38c4506698fd9a10b92137e2b2993b35e705fd"} Feb 04 09:00:07 crc kubenswrapper[4644]: I0204 09:00:07.522100 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.5220832399999997 podStartE2EDuration="3.52208324s" podCreationTimestamp="2026-02-04 09:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:00:07.496764167 +0000 UTC m=+1117.536821922" watchObservedRunningTime="2026-02-04 09:00:07.52208324 +0000 UTC m=+1117.562140995" Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.756860 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-47ns9" Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.851042 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-credential-keys\") pod \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.851147 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-fernet-keys\") pod \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.851174 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-config-data\") pod \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.851232 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sft69\" (UniqueName: \"kubernetes.io/projected/d30388d1-fcaa-4680-ba0e-7cf6b071c356-kube-api-access-sft69\") pod \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.851274 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-scripts\") pod \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.851376 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-combined-ca-bundle\") pod \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\" (UID: \"d30388d1-fcaa-4680-ba0e-7cf6b071c356\") " Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.853904 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d30388d1-fcaa-4680-ba0e-7cf6b071c356" (UID: "d30388d1-fcaa-4680-ba0e-7cf6b071c356"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.856700 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d30388d1-fcaa-4680-ba0e-7cf6b071c356-kube-api-access-sft69" (OuterVolumeSpecName: "kube-api-access-sft69") pod "d30388d1-fcaa-4680-ba0e-7cf6b071c356" (UID: "d30388d1-fcaa-4680-ba0e-7cf6b071c356"). InnerVolumeSpecName "kube-api-access-sft69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.858239 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-scripts" (OuterVolumeSpecName: "scripts") pod "d30388d1-fcaa-4680-ba0e-7cf6b071c356" (UID: "d30388d1-fcaa-4680-ba0e-7cf6b071c356"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.863975 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d30388d1-fcaa-4680-ba0e-7cf6b071c356" (UID: "d30388d1-fcaa-4680-ba0e-7cf6b071c356"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.910734 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-config-data" (OuterVolumeSpecName: "config-data") pod "d30388d1-fcaa-4680-ba0e-7cf6b071c356" (UID: "d30388d1-fcaa-4680-ba0e-7cf6b071c356"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.911253 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d30388d1-fcaa-4680-ba0e-7cf6b071c356" (UID: "d30388d1-fcaa-4680-ba0e-7cf6b071c356"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.953533 4644 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.953573 4644 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.953583 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.953594 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sft69\" (UniqueName: \"kubernetes.io/projected/d30388d1-fcaa-4680-ba0e-7cf6b071c356-kube-api-access-sft69\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.953605 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:09 crc kubenswrapper[4644]: I0204 09:00:09.953613 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30388d1-fcaa-4680-ba0e-7cf6b071c356-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.474457 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.549564 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-79mcx"] Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.552899 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-79mcx" podUID="43bcbb21-d5a1-440b-9554-306c249422f3" containerName="dnsmasq-dns" 
containerID="cri-o://98b7d1a96fd469bed3bc2e3646b8bd88388acb7f5143159087c4552e7735a889" gracePeriod=10 Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.567176 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e18a22-5f5b-4233-86cb-1c014ab61840","Type":"ContainerStarted","Data":"b8d2f9d0d5eba856866eb0223eb8fe1bc3726f9794663cd8255d3d6165529fc6"} Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.587122 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-47ns9" event={"ID":"d30388d1-fcaa-4680-ba0e-7cf6b071c356","Type":"ContainerDied","Data":"f17a34c9962dd493eba9b95576e0c02ed3324a948767d4ce9a2e540ef2a8efbc"} Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.587153 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f17a34c9962dd493eba9b95576e0c02ed3324a948767d4ce9a2e540ef2a8efbc" Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.587208 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-47ns9" Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.941536 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-766cbd9f4b-bj8dc"] Feb 04 09:00:10 crc kubenswrapper[4644]: E0204 09:00:10.958280 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30388d1-fcaa-4680-ba0e-7cf6b071c356" containerName="keystone-bootstrap" Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.958306 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30388d1-fcaa-4680-ba0e-7cf6b071c356" containerName="keystone-bootstrap" Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.958539 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="d30388d1-fcaa-4680-ba0e-7cf6b071c356" containerName="keystone-bootstrap" Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.959165 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-766cbd9f4b-bj8dc"] Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.959250 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.967885 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.968056 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4x467" Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.968091 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.968090 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.968197 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 04 09:00:10 crc kubenswrapper[4644]: I0204 09:00:10.968225 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.083886 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-fernet-keys\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.083956 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-credential-keys\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.084008 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-scripts\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.084053 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-public-tls-certs\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.084094 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-internal-tls-certs\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.084116 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-combined-ca-bundle\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.084135 4644 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-config-data\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.084153 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxx2w\" (UniqueName: \"kubernetes.io/projected/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-kube-api-access-fxx2w\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.185915 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-public-tls-certs\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.186021 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-internal-tls-certs\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.186088 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-combined-ca-bundle\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.186114 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-config-data\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.186141 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxx2w\" (UniqueName: \"kubernetes.io/projected/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-kube-api-access-fxx2w\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.186178 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-fernet-keys\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.186223 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-credential-keys\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.186290 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-scripts\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.191395 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-scripts\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.197095 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-fernet-keys\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.198918 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-combined-ca-bundle\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.199929 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-config-data\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.200365 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-internal-tls-certs\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.207710 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-credential-keys\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.219172 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-public-tls-certs\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.228287 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxx2w\" (UniqueName: \"kubernetes.io/projected/1fa2a049-f943-48c9-b4c2-09c2cd5decc2-kube-api-access-fxx2w\") pod \"keystone-766cbd9f4b-bj8dc\" (UID: \"1fa2a049-f943-48c9-b4c2-09c2cd5decc2\") " pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.287675 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.292132 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-79mcx" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.390714 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-dns-swift-storage-0\") pod \"43bcbb21-d5a1-440b-9554-306c249422f3\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.390776 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-dns-svc\") pod \"43bcbb21-d5a1-440b-9554-306c249422f3\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.390821 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdsq4\" (UniqueName: \"kubernetes.io/projected/43bcbb21-d5a1-440b-9554-306c249422f3-kube-api-access-rdsq4\") pod \"43bcbb21-d5a1-440b-9554-306c249422f3\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.390851 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-config\") pod \"43bcbb21-d5a1-440b-9554-306c249422f3\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.390887 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-ovsdbserver-sb\") pod \"43bcbb21-d5a1-440b-9554-306c249422f3\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.390986 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-ovsdbserver-nb\") pod \"43bcbb21-d5a1-440b-9554-306c249422f3\" (UID: \"43bcbb21-d5a1-440b-9554-306c249422f3\") " Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.496664 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43bcbb21-d5a1-440b-9554-306c249422f3-kube-api-access-rdsq4" (OuterVolumeSpecName: "kube-api-access-rdsq4") pod "43bcbb21-d5a1-440b-9554-306c249422f3" (UID: "43bcbb21-d5a1-440b-9554-306c249422f3"). InnerVolumeSpecName "kube-api-access-rdsq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.596939 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "43bcbb21-d5a1-440b-9554-306c249422f3" (UID: "43bcbb21-d5a1-440b-9554-306c249422f3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.598956 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdsq4\" (UniqueName: \"kubernetes.io/projected/43bcbb21-d5a1-440b-9554-306c249422f3-kube-api-access-rdsq4\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.598973 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.660005 4644 generic.go:334] "Generic (PLEG): container finished" podID="43bcbb21-d5a1-440b-9554-306c249422f3" containerID="98b7d1a96fd469bed3bc2e3646b8bd88388acb7f5143159087c4552e7735a889" exitCode=0 Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.660105 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-79mcx" event={"ID":"43bcbb21-d5a1-440b-9554-306c249422f3","Type":"ContainerDied","Data":"98b7d1a96fd469bed3bc2e3646b8bd88388acb7f5143159087c4552e7735a889"} Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.660135 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-79mcx" event={"ID":"43bcbb21-d5a1-440b-9554-306c249422f3","Type":"ContainerDied","Data":"55efd17c74bf3846f47886401fb42d1c0ddd4a0450cb84727301c58033a3289a"} Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.660154 4644 scope.go:117] "RemoveContainer" containerID="98b7d1a96fd469bed3bc2e3646b8bd88388acb7f5143159087c4552e7735a889" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.660174 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-79mcx" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.682543 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wlwtb" event={"ID":"f6df95b1-d952-4b17-bb90-2a32fecb0a5b","Type":"ContainerStarted","Data":"334bb5ebaa42cec7c5168838f937cf076e611ae95464a97820900a1878fbd00d"} Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.733844 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-wlwtb" podStartSLOduration=4.543301089 podStartE2EDuration="1m5.733823803s" podCreationTimestamp="2026-02-04 08:59:06 +0000 UTC" firstStartedPulling="2026-02-04 08:59:09.001466383 +0000 UTC m=+1059.041524138" lastFinishedPulling="2026-02-04 09:00:10.191989077 +0000 UTC m=+1120.232046852" observedRunningTime="2026-02-04 09:00:11.733642218 +0000 UTC m=+1121.773699973" watchObservedRunningTime="2026-02-04 09:00:11.733823803 +0000 UTC m=+1121.773881558" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.737779 4644 scope.go:117] "RemoveContainer" containerID="50ab352c69872eca92da3a99ac3b8608c02459622903fa5eae0325dd1df621a2" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.764520 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "43bcbb21-d5a1-440b-9554-306c249422f3" (UID: "43bcbb21-d5a1-440b-9554-306c249422f3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.786797 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-config" (OuterVolumeSpecName: "config") pod "43bcbb21-d5a1-440b-9554-306c249422f3" (UID: "43bcbb21-d5a1-440b-9554-306c249422f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.797493 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "43bcbb21-d5a1-440b-9554-306c249422f3" (UID: "43bcbb21-d5a1-440b-9554-306c249422f3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.802473 4644 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.802502 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-config\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.802511 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.804984 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "43bcbb21-d5a1-440b-9554-306c249422f3" (UID: "43bcbb21-d5a1-440b-9554-306c249422f3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.902915 4644 scope.go:117] "RemoveContainer" containerID="98b7d1a96fd469bed3bc2e3646b8bd88388acb7f5143159087c4552e7735a889" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.903793 4644 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43bcbb21-d5a1-440b-9554-306c249422f3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:11 crc kubenswrapper[4644]: E0204 09:00:11.913567 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98b7d1a96fd469bed3bc2e3646b8bd88388acb7f5143159087c4552e7735a889\": container with ID starting with 98b7d1a96fd469bed3bc2e3646b8bd88388acb7f5143159087c4552e7735a889 not found: ID does not exist" containerID="98b7d1a96fd469bed3bc2e3646b8bd88388acb7f5143159087c4552e7735a889" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.913609 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98b7d1a96fd469bed3bc2e3646b8bd88388acb7f5143159087c4552e7735a889"} err="failed to get container status \"98b7d1a96fd469bed3bc2e3646b8bd88388acb7f5143159087c4552e7735a889\": rpc error: code = NotFound desc = could not find container \"98b7d1a96fd469bed3bc2e3646b8bd88388acb7f5143159087c4552e7735a889\": container with ID starting with 98b7d1a96fd469bed3bc2e3646b8bd88388acb7f5143159087c4552e7735a889 not found: ID does not exist" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.913635 4644 scope.go:117] "RemoveContainer" containerID="50ab352c69872eca92da3a99ac3b8608c02459622903fa5eae0325dd1df621a2" Feb 04 09:00:11 crc kubenswrapper[4644]: E0204 09:00:11.914639 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50ab352c69872eca92da3a99ac3b8608c02459622903fa5eae0325dd1df621a2\": container with ID starting with 50ab352c69872eca92da3a99ac3b8608c02459622903fa5eae0325dd1df621a2 not found: ID does not exist" containerID="50ab352c69872eca92da3a99ac3b8608c02459622903fa5eae0325dd1df621a2" Feb 04 09:00:11 crc kubenswrapper[4644]: I0204 09:00:11.914659 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50ab352c69872eca92da3a99ac3b8608c02459622903fa5eae0325dd1df621a2"} err="failed to get container status \"50ab352c69872eca92da3a99ac3b8608c02459622903fa5eae0325dd1df621a2\": rpc error: code = NotFound desc = could not find container \"50ab352c69872eca92da3a99ac3b8608c02459622903fa5eae0325dd1df621a2\": container with ID starting with 50ab352c69872eca92da3a99ac3b8608c02459622903fa5eae0325dd1df621a2 not found: ID does not exist" Feb 04 09:00:12 crc kubenswrapper[4644]: I0204 09:00:12.014608 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-79mcx"] Feb 04 09:00:12 crc kubenswrapper[4644]: I0204 09:00:12.022308 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-79mcx"] Feb 04 09:00:12 crc kubenswrapper[4644]: I0204 09:00:12.238125 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-766cbd9f4b-bj8dc"] Feb 04 09:00:12 crc kubenswrapper[4644]: I0204 09:00:12.669364 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43bcbb21-d5a1-440b-9554-306c249422f3" path="/var/lib/kubelet/pods/43bcbb21-d5a1-440b-9554-306c249422f3/volumes" Feb 04 
09:00:12 crc kubenswrapper[4644]: I0204 09:00:12.692391 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6tq9r" event={"ID":"4623241a-c4dc-4646-9b03-aa89b84ca4b1","Type":"ContainerStarted","Data":"4a88d8a2e5ca1806dab14a6ce567ef62dc89f54c07b4be8dc54d2a78f4daf780"} Feb 04 09:00:12 crc kubenswrapper[4644]: I0204 09:00:12.698752 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-766cbd9f4b-bj8dc" event={"ID":"1fa2a049-f943-48c9-b4c2-09c2cd5decc2","Type":"ContainerStarted","Data":"bf89a8838346ec935c1aedc767925b3642866ff150175a6765f395a2db7a14bf"} Feb 04 09:00:12 crc kubenswrapper[4644]: I0204 09:00:12.698793 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-766cbd9f4b-bj8dc" event={"ID":"1fa2a049-f943-48c9-b4c2-09c2cd5decc2","Type":"ContainerStarted","Data":"14cce37ad7ee862a7fc768431e782d51916958c33fae91896b5e68004482c3ee"} Feb 04 09:00:12 crc kubenswrapper[4644]: I0204 09:00:12.717458 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-6tq9r" podStartSLOduration=4.262017289 podStartE2EDuration="1m6.717442451s" podCreationTimestamp="2026-02-04 08:59:06 +0000 UTC" firstStartedPulling="2026-02-04 08:59:08.836943798 +0000 UTC m=+1058.877001553" lastFinishedPulling="2026-02-04 09:00:11.29236896 +0000 UTC m=+1121.332426715" observedRunningTime="2026-02-04 09:00:12.710598348 +0000 UTC m=+1122.750656103" watchObservedRunningTime="2026-02-04 09:00:12.717442451 +0000 UTC m=+1122.757500206" Feb 04 09:00:13 crc kubenswrapper[4644]: I0204 09:00:13.706496 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:13 crc kubenswrapper[4644]: I0204 09:00:13.751265 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-766cbd9f4b-bj8dc" podStartSLOduration=3.751242852 podStartE2EDuration="3.751242852s" podCreationTimestamp="2026-02-04 09:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:00:13.743909497 +0000 UTC m=+1123.783967262" watchObservedRunningTime="2026-02-04 09:00:13.751242852 +0000 UTC m=+1123.791300607" Feb 04 09:00:14 crc kubenswrapper[4644]: I0204 09:00:14.064501 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 04 09:00:14 crc kubenswrapper[4644]: I0204 09:00:14.064554 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 04 09:00:14 crc kubenswrapper[4644]: I0204 09:00:14.096207 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 04 09:00:14 crc kubenswrapper[4644]: I0204 09:00:14.113910 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 04 09:00:14 crc kubenswrapper[4644]: I0204 09:00:14.724686 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 04 09:00:14 crc kubenswrapper[4644]: I0204 09:00:14.725718 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 04 09:00:14 crc kubenswrapper[4644]: I0204 09:00:14.743603 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-external-api-0" Feb 04 09:00:14 crc kubenswrapper[4644]: I0204 09:00:14.744649 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 04 09:00:14 crc kubenswrapper[4644]: I0204 09:00:14.798474 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 04 09:00:14 crc kubenswrapper[4644]: I0204 09:00:14.813568 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 04 09:00:15 crc kubenswrapper[4644]: I0204 09:00:15.731008 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 04 09:00:15 crc kubenswrapper[4644]: I0204 09:00:15.731076 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 04 09:00:15 crc kubenswrapper[4644]: I0204 09:00:15.848670 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=< Feb 04 09:00:15 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:00:15 crc kubenswrapper[4644]: > Feb 04 09:00:16 crc kubenswrapper[4644]: I0204 09:00:16.615252 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fb9db66f6-v84nx" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 04 09:00:16 crc kubenswrapper[4644]: I0204 09:00:16.676258 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-658bfcb544-88gj4" podUID="676db25f-e0ad-48cc-af2c-88029d6eb80d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 04 09:00:16 crc kubenswrapper[4644]: I0204 09:00:16.743141 4644 generic.go:334] "Generic (PLEG): container finished" podID="bcc65018-86ae-4c36-bf21-3849c09ee648" containerID="e9b4afe63498807468ab2afa12176fe6e890d0486f5c4adff425ef21e97a8a10" exitCode=0 Feb 04 09:00:16 crc kubenswrapper[4644]: I0204 09:00:16.744132 4644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 09:00:16 crc kubenswrapper[4644]: I0204 09:00:16.745298 4644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 09:00:16 crc kubenswrapper[4644]: I0204 09:00:16.744020 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9pl45" event={"ID":"bcc65018-86ae-4c36-bf21-3849c09ee648","Type":"ContainerDied","Data":"e9b4afe63498807468ab2afa12176fe6e890d0486f5c4adff425ef21e97a8a10"} Feb 04 09:00:22 crc kubenswrapper[4644]: I0204 09:00:22.232111 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9pl45" Feb 04 09:00:22 crc kubenswrapper[4644]: I0204 09:00:22.418177 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bcc65018-86ae-4c36-bf21-3849c09ee648-config\") pod \"bcc65018-86ae-4c36-bf21-3849c09ee648\" (UID: \"bcc65018-86ae-4c36-bf21-3849c09ee648\") " Feb 04 09:00:22 crc kubenswrapper[4644]: I0204 09:00:22.418268 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm9lw\" (UniqueName: \"kubernetes.io/projected/bcc65018-86ae-4c36-bf21-3849c09ee648-kube-api-access-zm9lw\") pod \"bcc65018-86ae-4c36-bf21-3849c09ee648\" (UID: \"bcc65018-86ae-4c36-bf21-3849c09ee648\") " Feb 04 09:00:22 crc kubenswrapper[4644]: I0204 09:00:22.418371 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcc65018-86ae-4c36-bf21-3849c09ee648-combined-ca-bundle\") pod \"bcc65018-86ae-4c36-bf21-3849c09ee648\" (UID: \"bcc65018-86ae-4c36-bf21-3849c09ee648\") " Feb 04 09:00:22 crc kubenswrapper[4644]: I0204 09:00:22.477517 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcc65018-86ae-4c36-bf21-3849c09ee648-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcc65018-86ae-4c36-bf21-3849c09ee648" (UID: "bcc65018-86ae-4c36-bf21-3849c09ee648"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:22 crc kubenswrapper[4644]: I0204 09:00:22.477739 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc65018-86ae-4c36-bf21-3849c09ee648-kube-api-access-zm9lw" (OuterVolumeSpecName: "kube-api-access-zm9lw") pod "bcc65018-86ae-4c36-bf21-3849c09ee648" (UID: "bcc65018-86ae-4c36-bf21-3849c09ee648"). InnerVolumeSpecName "kube-api-access-zm9lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:00:22 crc kubenswrapper[4644]: I0204 09:00:22.477924 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcc65018-86ae-4c36-bf21-3849c09ee648-config" (OuterVolumeSpecName: "config") pod "bcc65018-86ae-4c36-bf21-3849c09ee648" (UID: "bcc65018-86ae-4c36-bf21-3849c09ee648"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:22 crc kubenswrapper[4644]: I0204 09:00:22.520249 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcc65018-86ae-4c36-bf21-3849c09ee648-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:22 crc kubenswrapper[4644]: I0204 09:00:22.520289 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bcc65018-86ae-4c36-bf21-3849c09ee648-config\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:22 crc kubenswrapper[4644]: I0204 09:00:22.520313 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm9lw\" (UniqueName: \"kubernetes.io/projected/bcc65018-86ae-4c36-bf21-3849c09ee648-kube-api-access-zm9lw\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:22 crc kubenswrapper[4644]: I0204 09:00:22.825600 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9pl45" event={"ID":"bcc65018-86ae-4c36-bf21-3849c09ee648","Type":"ContainerDied","Data":"71702e314a7f41021e63c8522e6243fd7ba33d7f2bdec84f2b8187cbc42b9b65"} Feb 04 09:00:22 crc kubenswrapper[4644]: I0204 09:00:22.825919 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71702e314a7f41021e63c8522e6243fd7ba33d7f2bdec84f2b8187cbc42b9b65" Feb 04 09:00:22 crc kubenswrapper[4644]: I0204 09:00:22.826063 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9pl45" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.520504 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-2b2sv"] Feb 04 09:00:23 crc kubenswrapper[4644]: E0204 09:00:23.520888 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc65018-86ae-4c36-bf21-3849c09ee648" containerName="neutron-db-sync" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.520905 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc65018-86ae-4c36-bf21-3849c09ee648" containerName="neutron-db-sync" Feb 04 09:00:23 crc kubenswrapper[4644]: E0204 09:00:23.520938 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43bcbb21-d5a1-440b-9554-306c249422f3" containerName="dnsmasq-dns" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.520947 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="43bcbb21-d5a1-440b-9554-306c249422f3" containerName="dnsmasq-dns" Feb 04 09:00:23 crc kubenswrapper[4644]: E0204 09:00:23.520962 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43bcbb21-d5a1-440b-9554-306c249422f3" containerName="init" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.520968 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="43bcbb21-d5a1-440b-9554-306c249422f3" containerName="init" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.521163 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcc65018-86ae-4c36-bf21-3849c09ee648" containerName="neutron-db-sync" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.521214 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="43bcbb21-d5a1-440b-9554-306c249422f3" containerName="dnsmasq-dns" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.527612 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.553841 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-2b2sv"] Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.639067 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.639126 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-config\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.639201 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.639225 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-dns-svc\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.639244 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.639297 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl6wh\" (UniqueName: \"kubernetes.io/projected/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-kube-api-access-zl6wh\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.655814 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5df75db7c8-8lxlc"] Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.666355 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.670086 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.676232 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-l25sz" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.676285 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.676354 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5df75db7c8-8lxlc"] Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.676616 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.741029 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.741099 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-dns-svc\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.741133 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.742153 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.742428 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-dns-svc\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.742496 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl6wh\" (UniqueName: \"kubernetes.io/projected/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-kube-api-access-zl6wh\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.742694 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " 
pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.742776 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-config\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.743575 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-config\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.743637 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.743795 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.777365 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl6wh\" (UniqueName: \"kubernetes.io/projected/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-kube-api-access-zl6wh\") pod \"dnsmasq-dns-6b7b667979-2b2sv\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.844439 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-httpd-config\") pod \"neutron-5df75db7c8-8lxlc\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") " pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.844507 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-combined-ca-bundle\") pod \"neutron-5df75db7c8-8lxlc\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") " pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.844579 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-ovndb-tls-certs\") pod \"neutron-5df75db7c8-8lxlc\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") " pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.844810 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-config\") pod \"neutron-5df75db7c8-8lxlc\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") " pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:23 crc 
kubenswrapper[4644]: I0204 09:00:23.844932 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kflsz\" (UniqueName: \"kubernetes.io/projected/2cbe3b5d-7379-447e-acac-6f7306ce230f-kube-api-access-kflsz\") pod \"neutron-5df75db7c8-8lxlc\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") " pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.850261 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.946587 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-ovndb-tls-certs\") pod \"neutron-5df75db7c8-8lxlc\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") " pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.946708 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-config\") pod \"neutron-5df75db7c8-8lxlc\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") " pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.946746 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kflsz\" (UniqueName: \"kubernetes.io/projected/2cbe3b5d-7379-447e-acac-6f7306ce230f-kube-api-access-kflsz\") pod \"neutron-5df75db7c8-8lxlc\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") " pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.946794 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-httpd-config\") pod \"neutron-5df75db7c8-8lxlc\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") " pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.946828 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-combined-ca-bundle\") pod \"neutron-5df75db7c8-8lxlc\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") " pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.959261 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-httpd-config\") pod \"neutron-5df75db7c8-8lxlc\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") " pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.964969 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kflsz\" (UniqueName: \"kubernetes.io/projected/2cbe3b5d-7379-447e-acac-6f7306ce230f-kube-api-access-kflsz\") pod \"neutron-5df75db7c8-8lxlc\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") " pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.968843 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-combined-ca-bundle\") pod \"neutron-5df75db7c8-8lxlc\" (UID: 
\"2cbe3b5d-7379-447e-acac-6f7306ce230f\") " pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.971297 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-ovndb-tls-certs\") pod \"neutron-5df75db7c8-8lxlc\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") " pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:23 crc kubenswrapper[4644]: I0204 09:00:23.978643 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-config\") pod \"neutron-5df75db7c8-8lxlc\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") " pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:24 crc kubenswrapper[4644]: I0204 09:00:24.009393 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:25 crc kubenswrapper[4644]: I0204 09:00:25.359762 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-2b2sv"] Feb 04 09:00:25 crc kubenswrapper[4644]: W0204 09:00:25.360950 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29d48c89_782f_41fd_9fb8_f3cd6edd0a4c.slice/crio-ac57d5df5bfca740e57eb3cd254b1cb84c577b154b8bb95407b6cb0241b60cdc WatchSource:0}: Error finding container ac57d5df5bfca740e57eb3cd254b1cb84c577b154b8bb95407b6cb0241b60cdc: Status 404 returned error can't find the container with id ac57d5df5bfca740e57eb3cd254b1cb84c577b154b8bb95407b6cb0241b60cdc Feb 04 09:00:25 crc kubenswrapper[4644]: I0204 09:00:25.624272 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5df75db7c8-8lxlc"] Feb 04 09:00:25 crc kubenswrapper[4644]: W0204 09:00:25.633364 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cbe3b5d_7379_447e_acac_6f7306ce230f.slice/crio-eb53fc54c789f26045569ab706f7c08d2a759b5c392eea86b7f5e34d46cf936a WatchSource:0}: Error finding container eb53fc54c789f26045569ab706f7c08d2a759b5c392eea86b7f5e34d46cf936a: Status 404 returned error can't find the container with id eb53fc54c789f26045569ab706f7c08d2a759b5c392eea86b7f5e34d46cf936a Feb 04 09:00:25 crc kubenswrapper[4644]: I0204 09:00:25.856115 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e18a22-5f5b-4233-86cb-1c014ab61840","Type":"ContainerStarted","Data":"d8c455e6f38e0c6cccedfd565d04298b816fbc51955721c6d55de9cfa513b021"} Feb 04 09:00:25 crc kubenswrapper[4644]: I0204 09:00:25.856257 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerName="ceilometer-central-agent" containerID="cri-o://5884dd8335eb971ba80b017c80a1aac95b7916a46d83af9c1d0da691a592df56" gracePeriod=30 Feb 04 09:00:25 crc kubenswrapper[4644]: I0204 09:00:25.856337 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerName="sg-core" containerID="cri-o://b8d2f9d0d5eba856866eb0223eb8fe1bc3726f9794663cd8255d3d6165529fc6" gracePeriod=30 Feb 04 09:00:25 crc kubenswrapper[4644]: I0204 09:00:25.856361 4644 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerName="ceilometer-notification-agent" containerID="cri-o://fa849767dc0fad0ce5a803017dabf754eac706e29f9879429800565872437189" gracePeriod=30 Feb 04 09:00:25 crc kubenswrapper[4644]: I0204 09:00:25.856429 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 04 09:00:25 crc kubenswrapper[4644]: I0204 09:00:25.856552 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerName="proxy-httpd" containerID="cri-o://d8c455e6f38e0c6cccedfd565d04298b816fbc51955721c6d55de9cfa513b021" gracePeriod=30 Feb 04 09:00:25 crc kubenswrapper[4644]: I0204 09:00:25.858673 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nlr7w" event={"ID":"c6677efd-b2e4-45b7-8703-3a189d87723d","Type":"ContainerStarted","Data":"30570130a3239b29d0b7cd583c4eefdd2b82a5f8b87fc328a416e2332a72b513"} Feb 04 09:00:25 crc kubenswrapper[4644]: I0204 09:00:25.863853 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=< Feb 04 09:00:25 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:00:25 crc kubenswrapper[4644]: > Feb 04 09:00:25 crc kubenswrapper[4644]: I0204 09:00:25.868133 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" event={"ID":"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c","Type":"ContainerStarted","Data":"01cc4d7faf1337a35ddc34ca19428c3bef327c8e25e15608a360ec6ea3ef6eec"} Feb 04 09:00:25 crc kubenswrapper[4644]: I0204 09:00:25.868179 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" event={"ID":"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c","Type":"ContainerStarted","Data":"ac57d5df5bfca740e57eb3cd254b1cb84c577b154b8bb95407b6cb0241b60cdc"} Feb 04 09:00:25 crc kubenswrapper[4644]: I0204 09:00:25.871688 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5df75db7c8-8lxlc" event={"ID":"2cbe3b5d-7379-447e-acac-6f7306ce230f","Type":"ContainerStarted","Data":"eb53fc54c789f26045569ab706f7c08d2a759b5c392eea86b7f5e34d46cf936a"} Feb 04 09:00:25 crc kubenswrapper[4644]: I0204 09:00:25.933319 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.114592173 podStartE2EDuration="1m19.933298806s" podCreationTimestamp="2026-02-04 08:59:06 +0000 UTC" firstStartedPulling="2026-02-04 08:59:09.16719602 +0000 UTC m=+1059.207253775" lastFinishedPulling="2026-02-04 09:00:24.985902663 +0000 UTC m=+1135.025960408" observedRunningTime="2026-02-04 09:00:25.929244109 +0000 UTC m=+1135.969301864" watchObservedRunningTime="2026-02-04 09:00:25.933298806 +0000 UTC m=+1135.973356571" Feb 04 09:00:25 crc kubenswrapper[4644]: I0204 09:00:25.966639 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-nlr7w" podStartSLOduration=3.831865693 podStartE2EDuration="1m19.966615772s" podCreationTimestamp="2026-02-04 08:59:06 +0000 UTC" firstStartedPulling="2026-02-04 08:59:08.833752501 +0000 UTC m=+1058.873810256" lastFinishedPulling="2026-02-04 09:00:24.96850258 +0000 UTC m=+1135.008560335" observedRunningTime="2026-02-04 09:00:25.956223686 +0000 UTC m=+1135.996281441" 
watchObservedRunningTime="2026-02-04 09:00:25.966615772 +0000 UTC m=+1136.006673527" Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.613790 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fb9db66f6-v84nx" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.674751 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-658bfcb544-88gj4" podUID="676db25f-e0ad-48cc-af2c-88029d6eb80d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.860167 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5cdfd666b9-jkzcm"] Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.861445 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.864495 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.868928 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.882907 4644 generic.go:334] "Generic (PLEG): container finished" podID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerID="d8c455e6f38e0c6cccedfd565d04298b816fbc51955721c6d55de9cfa513b021" exitCode=0 Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.882943 4644 generic.go:334] "Generic (PLEG): container finished" podID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerID="b8d2f9d0d5eba856866eb0223eb8fe1bc3726f9794663cd8255d3d6165529fc6" exitCode=2 Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.882955 4644 generic.go:334] "Generic (PLEG): container finished" podID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerID="5884dd8335eb971ba80b017c80a1aac95b7916a46d83af9c1d0da691a592df56" exitCode=0 Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.882996 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e18a22-5f5b-4233-86cb-1c014ab61840","Type":"ContainerDied","Data":"d8c455e6f38e0c6cccedfd565d04298b816fbc51955721c6d55de9cfa513b021"} Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.883027 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e18a22-5f5b-4233-86cb-1c014ab61840","Type":"ContainerDied","Data":"b8d2f9d0d5eba856866eb0223eb8fe1bc3726f9794663cd8255d3d6165529fc6"} Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.883040 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e18a22-5f5b-4233-86cb-1c014ab61840","Type":"ContainerDied","Data":"5884dd8335eb971ba80b017c80a1aac95b7916a46d83af9c1d0da691a592df56"} Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.884558 4644 generic.go:334] "Generic (PLEG): container finished" podID="29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" containerID="01cc4d7faf1337a35ddc34ca19428c3bef327c8e25e15608a360ec6ea3ef6eec" exitCode=0 Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.884610 4644 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" event={"ID":"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c","Type":"ContainerDied","Data":"01cc4d7faf1337a35ddc34ca19428c3bef327c8e25e15608a360ec6ea3ef6eec"} Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.891377 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.891537 4644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.897961 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5df75db7c8-8lxlc" event={"ID":"2cbe3b5d-7379-447e-acac-6f7306ce230f","Type":"ContainerStarted","Data":"40b3b5adf9ce6ffd115ee5cdeea3d89a928883d44e2059faa9ee950e748cf3fc"} Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.898022 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5df75db7c8-8lxlc" event={"ID":"2cbe3b5d-7379-447e-acac-6f7306ce230f","Type":"ContainerStarted","Data":"8f400303386c89b5f4915ede18ee680310e248ff45456398b81f1abc25bf895d"} Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.898944 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:26 crc kubenswrapper[4644]: I0204 09:00:26.905156 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cdfd666b9-jkzcm"] Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.011013 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5df75db7c8-8lxlc" podStartSLOduration=4.010996936 podStartE2EDuration="4.010996936s" podCreationTimestamp="2026-02-04 09:00:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:00:26.99014035 +0000 UTC m=+1137.030198115" watchObservedRunningTime="2026-02-04 09:00:27.010996936 +0000 UTC m=+1137.051054691" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.033707 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-combined-ca-bundle\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.033863 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-config\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.033919 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-public-tls-certs\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.033994 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-internal-tls-certs\") pod \"neutron-5cdfd666b9-jkzcm\" 
(UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.034102 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-ovndb-tls-certs\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.034169 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snmjp\" (UniqueName: \"kubernetes.io/projected/e05bc597-36c9-492b-abb4-45edb814eed5-kube-api-access-snmjp\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.034371 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-httpd-config\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.105761 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.136390 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-httpd-config\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.136474 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-combined-ca-bundle\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.136535 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-config\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.136598 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-public-tls-certs\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.136633 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-internal-tls-certs\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.136675 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-ovndb-tls-certs\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.136712 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snmjp\" (UniqueName: \"kubernetes.io/projected/e05bc597-36c9-492b-abb4-45edb814eed5-kube-api-access-snmjp\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.147520 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-config\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.149604 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-internal-tls-certs\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.173103 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-public-tls-certs\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.175132 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snmjp\" (UniqueName: \"kubernetes.io/projected/e05bc597-36c9-492b-abb4-45edb814eed5-kube-api-access-snmjp\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.213019 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-ovndb-tls-certs\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.214134 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-combined-ca-bundle\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.225937 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e05bc597-36c9-492b-abb4-45edb814eed5-httpd-config\") pod \"neutron-5cdfd666b9-jkzcm\" (UID: \"e05bc597-36c9-492b-abb4-45edb814eed5\") " pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.305904 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.306018 4644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 09:00:27 crc 
kubenswrapper[4644]: I0204 09:00:27.445054 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.475315 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.925615 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" event={"ID":"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c","Type":"ContainerStarted","Data":"abca8bb5aed873023538d417d790b2a138c27b4ea5ecc781c6a6caa88e5f2af1"} Feb 04 09:00:27 crc kubenswrapper[4644]: I0204 09:00:27.926747 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:28 crc kubenswrapper[4644]: I0204 09:00:28.090400 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" podStartSLOduration=5.0903792 podStartE2EDuration="5.0903792s" podCreationTimestamp="2026-02-04 09:00:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:00:27.958922153 +0000 UTC m=+1137.998979908" watchObservedRunningTime="2026-02-04 09:00:28.0903792 +0000 UTC m=+1138.130436965" Feb 04 09:00:28 crc kubenswrapper[4644]: I0204 09:00:28.097464 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cdfd666b9-jkzcm"] Feb 04 09:00:28 crc kubenswrapper[4644]: I0204 09:00:28.933218 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cdfd666b9-jkzcm" event={"ID":"e05bc597-36c9-492b-abb4-45edb814eed5","Type":"ContainerStarted","Data":"daf9c5af6568d08e9d925c0ffd797690f48716d22241da4258c000e012be1800"} Feb 04 09:00:28 crc kubenswrapper[4644]: I0204 09:00:28.933710 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:28 crc kubenswrapper[4644]: I0204 09:00:28.933722 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cdfd666b9-jkzcm" event={"ID":"e05bc597-36c9-492b-abb4-45edb814eed5","Type":"ContainerStarted","Data":"4cfad615e9fd656d5c6251ce4b1ccec797d96dbf7416153b2842b595d2cdd45e"} Feb 04 09:00:28 crc kubenswrapper[4644]: I0204 09:00:28.933731 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cdfd666b9-jkzcm" event={"ID":"e05bc597-36c9-492b-abb4-45edb814eed5","Type":"ContainerStarted","Data":"339a85f547512255b2adc48f743569b63637e49f7e140d7e39c36c26fecdf8e2"} Feb 04 09:00:28 crc kubenswrapper[4644]: I0204 09:00:28.936534 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5df75db7c8-8lxlc_2cbe3b5d-7379-447e-acac-6f7306ce230f/neutron-httpd/0.log" Feb 04 09:00:28 crc kubenswrapper[4644]: I0204 09:00:28.936965 4644 generic.go:334] "Generic (PLEG): container finished" podID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerID="40b3b5adf9ce6ffd115ee5cdeea3d89a928883d44e2059faa9ee950e748cf3fc" exitCode=1 Feb 04 09:00:28 crc kubenswrapper[4644]: I0204 09:00:28.937049 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5df75db7c8-8lxlc" event={"ID":"2cbe3b5d-7379-447e-acac-6f7306ce230f","Type":"ContainerDied","Data":"40b3b5adf9ce6ffd115ee5cdeea3d89a928883d44e2059faa9ee950e748cf3fc"} Feb 04 09:00:28 crc kubenswrapper[4644]: I0204 09:00:28.937945 4644 scope.go:117] "RemoveContainer" 
containerID="40b3b5adf9ce6ffd115ee5cdeea3d89a928883d44e2059faa9ee950e748cf3fc" Feb 04 09:00:29 crc kubenswrapper[4644]: I0204 09:00:29.011142 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5cdfd666b9-jkzcm" podStartSLOduration=3.011126554 podStartE2EDuration="3.011126554s" podCreationTimestamp="2026-02-04 09:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:00:28.975082545 +0000 UTC m=+1139.015140310" watchObservedRunningTime="2026-02-04 09:00:29.011126554 +0000 UTC m=+1139.051184309" Feb 04 09:00:29 crc kubenswrapper[4644]: I0204 09:00:29.948293 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5df75db7c8-8lxlc_2cbe3b5d-7379-447e-acac-6f7306ce230f/neutron-httpd/1.log" Feb 04 09:00:29 crc kubenswrapper[4644]: I0204 09:00:29.948975 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5df75db7c8-8lxlc_2cbe3b5d-7379-447e-acac-6f7306ce230f/neutron-httpd/0.log" Feb 04 09:00:29 crc kubenswrapper[4644]: I0204 09:00:29.949393 4644 generic.go:334] "Generic (PLEG): container finished" podID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerID="cb6cfc34a338bc735b4c1772b66035ccede7e4cff40fff8f8b5238a7eb360533" exitCode=1 Feb 04 09:00:29 crc kubenswrapper[4644]: I0204 09:00:29.949452 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5df75db7c8-8lxlc" event={"ID":"2cbe3b5d-7379-447e-acac-6f7306ce230f","Type":"ContainerDied","Data":"cb6cfc34a338bc735b4c1772b66035ccede7e4cff40fff8f8b5238a7eb360533"} Feb 04 09:00:29 crc kubenswrapper[4644]: I0204 09:00:29.949490 4644 scope.go:117] "RemoveContainer" containerID="40b3b5adf9ce6ffd115ee5cdeea3d89a928883d44e2059faa9ee950e748cf3fc" Feb 04 09:00:29 crc kubenswrapper[4644]: I0204 09:00:29.950221 4644 scope.go:117] "RemoveContainer" containerID="cb6cfc34a338bc735b4c1772b66035ccede7e4cff40fff8f8b5238a7eb360533" Feb 04 09:00:29 crc kubenswrapper[4644]: E0204 09:00:29.950505 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-5df75db7c8-8lxlc_openstack(2cbe3b5d-7379-447e-acac-6f7306ce230f)\"" pod="openstack/neutron-5df75db7c8-8lxlc" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" Feb 04 09:00:29 crc kubenswrapper[4644]: I0204 09:00:29.964821 4644 generic.go:334] "Generic (PLEG): container finished" podID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerID="fa849767dc0fad0ce5a803017dabf754eac706e29f9879429800565872437189" exitCode=0 Feb 04 09:00:29 crc kubenswrapper[4644]: I0204 09:00:29.965731 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e18a22-5f5b-4233-86cb-1c014ab61840","Type":"ContainerDied","Data":"fa849767dc0fad0ce5a803017dabf754eac706e29f9879429800565872437189"} Feb 04 09:00:29 crc kubenswrapper[4644]: I0204 09:00:29.965829 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e18a22-5f5b-4233-86cb-1c014ab61840","Type":"ContainerDied","Data":"27f85d3cf912538a58bc74c147131f889352e573cd15cb01463214e3afd991a6"} Feb 04 09:00:29 crc kubenswrapper[4644]: I0204 09:00:29.965893 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27f85d3cf912538a58bc74c147131f889352e573cd15cb01463214e3afd991a6" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 
09:00:30.030471 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.124352 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-combined-ca-bundle\") pod \"39e18a22-5f5b-4233-86cb-1c014ab61840\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.124398 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-config-data\") pod \"39e18a22-5f5b-4233-86cb-1c014ab61840\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.124505 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e18a22-5f5b-4233-86cb-1c014ab61840-run-httpd\") pod \"39e18a22-5f5b-4233-86cb-1c014ab61840\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.124537 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25vcc\" (UniqueName: \"kubernetes.io/projected/39e18a22-5f5b-4233-86cb-1c014ab61840-kube-api-access-25vcc\") pod \"39e18a22-5f5b-4233-86cb-1c014ab61840\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.124578 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e18a22-5f5b-4233-86cb-1c014ab61840-log-httpd\") pod \"39e18a22-5f5b-4233-86cb-1c014ab61840\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.124605 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-scripts\") pod \"39e18a22-5f5b-4233-86cb-1c014ab61840\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.124660 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-sg-core-conf-yaml\") pod \"39e18a22-5f5b-4233-86cb-1c014ab61840\" (UID: \"39e18a22-5f5b-4233-86cb-1c014ab61840\") " Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.124991 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e18a22-5f5b-4233-86cb-1c014ab61840-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "39e18a22-5f5b-4233-86cb-1c014ab61840" (UID: "39e18a22-5f5b-4233-86cb-1c014ab61840"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.125372 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e18a22-5f5b-4233-86cb-1c014ab61840-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "39e18a22-5f5b-4233-86cb-1c014ab61840" (UID: "39e18a22-5f5b-4233-86cb-1c014ab61840"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.126439 4644 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e18a22-5f5b-4233-86cb-1c014ab61840-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.126454 4644 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e18a22-5f5b-4233-86cb-1c014ab61840-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.133766 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-scripts" (OuterVolumeSpecName: "scripts") pod "39e18a22-5f5b-4233-86cb-1c014ab61840" (UID: "39e18a22-5f5b-4233-86cb-1c014ab61840"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.134380 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39e18a22-5f5b-4233-86cb-1c014ab61840-kube-api-access-25vcc" (OuterVolumeSpecName: "kube-api-access-25vcc") pod "39e18a22-5f5b-4233-86cb-1c014ab61840" (UID: "39e18a22-5f5b-4233-86cb-1c014ab61840"). InnerVolumeSpecName "kube-api-access-25vcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.157504 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "39e18a22-5f5b-4233-86cb-1c014ab61840" (UID: "39e18a22-5f5b-4233-86cb-1c014ab61840"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.231694 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25vcc\" (UniqueName: \"kubernetes.io/projected/39e18a22-5f5b-4233-86cb-1c014ab61840-kube-api-access-25vcc\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.231766 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.231783 4644 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.242568 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39e18a22-5f5b-4233-86cb-1c014ab61840" (UID: "39e18a22-5f5b-4233-86cb-1c014ab61840"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.254540 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-config-data" (OuterVolumeSpecName: "config-data") pod "39e18a22-5f5b-4233-86cb-1c014ab61840" (UID: "39e18a22-5f5b-4233-86cb-1c014ab61840"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.334083 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.334115 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39e18a22-5f5b-4233-86cb-1c014ab61840-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.991272 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5df75db7c8-8lxlc_2cbe3b5d-7379-447e-acac-6f7306ce230f/neutron-httpd/1.log" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.993422 4644 generic.go:334] "Generic (PLEG): container finished" podID="4623241a-c4dc-4646-9b03-aa89b84ca4b1" containerID="4a88d8a2e5ca1806dab14a6ce567ef62dc89f54c07b4be8dc54d2a78f4daf780" exitCode=0 Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.993486 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.993949 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6tq9r" event={"ID":"4623241a-c4dc-4646-9b03-aa89b84ca4b1","Type":"ContainerDied","Data":"4a88d8a2e5ca1806dab14a6ce567ef62dc89f54c07b4be8dc54d2a78f4daf780"} Feb 04 09:00:30 crc kubenswrapper[4644]: I0204 09:00:30.994879 4644 scope.go:117] "RemoveContainer" containerID="cb6cfc34a338bc735b4c1772b66035ccede7e4cff40fff8f8b5238a7eb360533" Feb 04 09:00:30 crc kubenswrapper[4644]: E0204 09:00:30.996806 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-5df75db7c8-8lxlc_openstack(2cbe3b5d-7379-447e-acac-6f7306ce230f)\"" pod="openstack/neutron-5df75db7c8-8lxlc" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.095527 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.113555 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.127385 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:00:31 crc kubenswrapper[4644]: E0204 09:00:31.127801 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerName="sg-core" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.127825 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerName="sg-core" Feb 04 09:00:31 crc kubenswrapper[4644]: E0204 09:00:31.127849 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerName="ceilometer-central-agent" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.127857 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerName="ceilometer-central-agent" Feb 04 09:00:31 crc kubenswrapper[4644]: E0204 09:00:31.127876 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" 
containerName="ceilometer-notification-agent" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.127884 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerName="ceilometer-notification-agent" Feb 04 09:00:31 crc kubenswrapper[4644]: E0204 09:00:31.127898 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerName="proxy-httpd" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.127904 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerName="proxy-httpd" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.128119 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerName="ceilometer-central-agent" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.128138 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerName="sg-core" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.128160 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerName="proxy-httpd" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.128170 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" containerName="ceilometer-notification-agent" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.129847 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.134707 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.141564 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.141644 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.259089 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-scripts\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.259158 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad50edf-1565-43a4-b0c6-7aef5bc98722-log-httpd\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.259197 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.259234 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad50edf-1565-43a4-b0c6-7aef5bc98722-run-httpd\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " 
pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.259250 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsh92\" (UniqueName: \"kubernetes.io/projected/3ad50edf-1565-43a4-b0c6-7aef5bc98722-kube-api-access-jsh92\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.259547 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.259597 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-config-data\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.360851 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.360900 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-config-data\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.360949 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-scripts\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.360991 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad50edf-1565-43a4-b0c6-7aef5bc98722-log-httpd\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.361030 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.361074 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad50edf-1565-43a4-b0c6-7aef5bc98722-run-httpd\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.361093 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsh92\" (UniqueName: \"kubernetes.io/projected/3ad50edf-1565-43a4-b0c6-7aef5bc98722-kube-api-access-jsh92\") pod 
\"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.362049 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad50edf-1565-43a4-b0c6-7aef5bc98722-log-httpd\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.366687 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad50edf-1565-43a4-b0c6-7aef5bc98722-run-httpd\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.367295 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-scripts\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.367742 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.369414 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-config-data\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.374077 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.380406 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsh92\" (UniqueName: \"kubernetes.io/projected/3ad50edf-1565-43a4-b0c6-7aef5bc98722-kube-api-access-jsh92\") pod \"ceilometer-0\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") " pod="openstack/ceilometer-0" Feb 04 09:00:31 crc kubenswrapper[4644]: I0204 09:00:31.466419 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 09:00:32 crc kubenswrapper[4644]: I0204 09:00:32.003804 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:00:32 crc kubenswrapper[4644]: W0204 09:00:32.010015 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ad50edf_1565_43a4_b0c6_7aef5bc98722.slice/crio-0f0d7ea32a29835e75b900c128986ccd36e15ca782ee718a52f1bb43f3a0dcd2 WatchSource:0}: Error finding container 0f0d7ea32a29835e75b900c128986ccd36e15ca782ee718a52f1bb43f3a0dcd2: Status 404 returned error can't find the container with id 0f0d7ea32a29835e75b900c128986ccd36e15ca782ee718a52f1bb43f3a0dcd2 Feb 04 09:00:32 crc kubenswrapper[4644]: I0204 09:00:32.387673 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6tq9r" Feb 04 09:00:32 crc kubenswrapper[4644]: I0204 09:00:32.483966 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4623241a-c4dc-4646-9b03-aa89b84ca4b1-combined-ca-bundle\") pod \"4623241a-c4dc-4646-9b03-aa89b84ca4b1\" (UID: \"4623241a-c4dc-4646-9b03-aa89b84ca4b1\") " Feb 04 09:00:32 crc kubenswrapper[4644]: I0204 09:00:32.484163 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5lrj\" (UniqueName: \"kubernetes.io/projected/4623241a-c4dc-4646-9b03-aa89b84ca4b1-kube-api-access-l5lrj\") pod \"4623241a-c4dc-4646-9b03-aa89b84ca4b1\" (UID: \"4623241a-c4dc-4646-9b03-aa89b84ca4b1\") " Feb 04 09:00:32 crc kubenswrapper[4644]: I0204 09:00:32.484191 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4623241a-c4dc-4646-9b03-aa89b84ca4b1-db-sync-config-data\") pod \"4623241a-c4dc-4646-9b03-aa89b84ca4b1\" (UID: \"4623241a-c4dc-4646-9b03-aa89b84ca4b1\") " Feb 04 09:00:32 crc kubenswrapper[4644]: I0204 09:00:32.502969 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4623241a-c4dc-4646-9b03-aa89b84ca4b1-kube-api-access-l5lrj" (OuterVolumeSpecName: "kube-api-access-l5lrj") pod "4623241a-c4dc-4646-9b03-aa89b84ca4b1" (UID: "4623241a-c4dc-4646-9b03-aa89b84ca4b1"). InnerVolumeSpecName "kube-api-access-l5lrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:00:32 crc kubenswrapper[4644]: I0204 09:00:32.506317 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4623241a-c4dc-4646-9b03-aa89b84ca4b1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4623241a-c4dc-4646-9b03-aa89b84ca4b1" (UID: "4623241a-c4dc-4646-9b03-aa89b84ca4b1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:32 crc kubenswrapper[4644]: I0204 09:00:32.577508 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4623241a-c4dc-4646-9b03-aa89b84ca4b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4623241a-c4dc-4646-9b03-aa89b84ca4b1" (UID: "4623241a-c4dc-4646-9b03-aa89b84ca4b1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:32 crc kubenswrapper[4644]: I0204 09:00:32.586450 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4623241a-c4dc-4646-9b03-aa89b84ca4b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:32 crc kubenswrapper[4644]: I0204 09:00:32.586491 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5lrj\" (UniqueName: \"kubernetes.io/projected/4623241a-c4dc-4646-9b03-aa89b84ca4b1-kube-api-access-l5lrj\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:32 crc kubenswrapper[4644]: I0204 09:00:32.586502 4644 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4623241a-c4dc-4646-9b03-aa89b84ca4b1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:32 crc kubenswrapper[4644]: I0204 09:00:32.673625 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39e18a22-5f5b-4233-86cb-1c014ab61840" path="/var/lib/kubelet/pods/39e18a22-5f5b-4233-86cb-1c014ab61840/volumes" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.020968 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6tq9r" event={"ID":"4623241a-c4dc-4646-9b03-aa89b84ca4b1","Type":"ContainerDied","Data":"2d2a856ec94687a817b78c9268cea98b1157be90c703e6df1e98ed9dc6b4dbf6"} Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.021016 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d2a856ec94687a817b78c9268cea98b1157be90c703e6df1e98ed9dc6b4dbf6" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.021033 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6tq9r" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.023670 4644 generic.go:334] "Generic (PLEG): container finished" podID="c6677efd-b2e4-45b7-8703-3a189d87723d" containerID="30570130a3239b29d0b7cd583c4eefdd2b82a5f8b87fc328a416e2332a72b513" exitCode=0 Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.023737 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nlr7w" event={"ID":"c6677efd-b2e4-45b7-8703-3a189d87723d","Type":"ContainerDied","Data":"30570130a3239b29d0b7cd583c4eefdd2b82a5f8b87fc328a416e2332a72b513"} Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.026477 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad50edf-1565-43a4-b0c6-7aef5bc98722","Type":"ContainerStarted","Data":"624434945be32abda71164d79ad698fbaa08ec31dc56b12297b43697c9870b12"} Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.026531 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad50edf-1565-43a4-b0c6-7aef5bc98722","Type":"ContainerStarted","Data":"0f0d7ea32a29835e75b900c128986ccd36e15ca782ee718a52f1bb43f3a0dcd2"} Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.375001 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-747c75f8c-ljgzl"] Feb 04 09:00:33 crc kubenswrapper[4644]: E0204 09:00:33.375433 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4623241a-c4dc-4646-9b03-aa89b84ca4b1" containerName="barbican-db-sync" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.375451 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="4623241a-c4dc-4646-9b03-aa89b84ca4b1" 
containerName="barbican-db-sync" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.375579 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="4623241a-c4dc-4646-9b03-aa89b84ca4b1" containerName="barbican-db-sync" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.376442 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.385968 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hmrt5" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.386371 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.386555 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.430079 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-747c75f8c-ljgzl"] Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.511934 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvx6b\" (UniqueName: \"kubernetes.io/projected/be2eab6d-9a04-400b-baa9-c20fe5fcd269-kube-api-access-fvx6b\") pod \"barbican-worker-747c75f8c-ljgzl\" (UID: \"be2eab6d-9a04-400b-baa9-c20fe5fcd269\") " pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.512087 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2eab6d-9a04-400b-baa9-c20fe5fcd269-config-data\") pod \"barbican-worker-747c75f8c-ljgzl\" (UID: \"be2eab6d-9a04-400b-baa9-c20fe5fcd269\") " pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.512136 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be2eab6d-9a04-400b-baa9-c20fe5fcd269-logs\") pod \"barbican-worker-747c75f8c-ljgzl\" (UID: \"be2eab6d-9a04-400b-baa9-c20fe5fcd269\") " pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.512167 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2eab6d-9a04-400b-baa9-c20fe5fcd269-combined-ca-bundle\") pod \"barbican-worker-747c75f8c-ljgzl\" (UID: \"be2eab6d-9a04-400b-baa9-c20fe5fcd269\") " pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.512235 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be2eab6d-9a04-400b-baa9-c20fe5fcd269-config-data-custom\") pod \"barbican-worker-747c75f8c-ljgzl\" (UID: \"be2eab6d-9a04-400b-baa9-c20fe5fcd269\") " pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.535046 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-866db59d-m5kdr"] Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.536934 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.543704 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.562098 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-2b2sv"] Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.562413 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" podUID="29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" containerName="dnsmasq-dns" containerID="cri-o://abca8bb5aed873023538d417d790b2a138c27b4ea5ecc781c6a6caa88e5f2af1" gracePeriod=10 Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.567432 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.588043 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-866db59d-m5kdr"] Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.617030 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvx6b\" (UniqueName: \"kubernetes.io/projected/be2eab6d-9a04-400b-baa9-c20fe5fcd269-kube-api-access-fvx6b\") pod \"barbican-worker-747c75f8c-ljgzl\" (UID: \"be2eab6d-9a04-400b-baa9-c20fe5fcd269\") " pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.617101 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2eab6d-9a04-400b-baa9-c20fe5fcd269-config-data\") pod \"barbican-worker-747c75f8c-ljgzl\" (UID: \"be2eab6d-9a04-400b-baa9-c20fe5fcd269\") " pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.617145 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be2eab6d-9a04-400b-baa9-c20fe5fcd269-logs\") pod \"barbican-worker-747c75f8c-ljgzl\" (UID: \"be2eab6d-9a04-400b-baa9-c20fe5fcd269\") " pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.617180 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2eab6d-9a04-400b-baa9-c20fe5fcd269-combined-ca-bundle\") pod \"barbican-worker-747c75f8c-ljgzl\" (UID: \"be2eab6d-9a04-400b-baa9-c20fe5fcd269\") " pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.617214 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0a0f2d9-bd63-4dc5-826c-5d67f92a31da-logs\") pod \"barbican-keystone-listener-866db59d-m5kdr\" (UID: \"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da\") " pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.617247 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be2eab6d-9a04-400b-baa9-c20fe5fcd269-config-data-custom\") pod \"barbican-worker-747c75f8c-ljgzl\" (UID: \"be2eab6d-9a04-400b-baa9-c20fe5fcd269\") " pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc 
kubenswrapper[4644]: I0204 09:00:33.617276 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a0f2d9-bd63-4dc5-826c-5d67f92a31da-combined-ca-bundle\") pod \"barbican-keystone-listener-866db59d-m5kdr\" (UID: \"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da\") " pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.617296 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l6td\" (UniqueName: \"kubernetes.io/projected/f0a0f2d9-bd63-4dc5-826c-5d67f92a31da-kube-api-access-2l6td\") pod \"barbican-keystone-listener-866db59d-m5kdr\" (UID: \"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da\") " pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.617359 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0a0f2d9-bd63-4dc5-826c-5d67f92a31da-config-data-custom\") pod \"barbican-keystone-listener-866db59d-m5kdr\" (UID: \"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da\") " pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.617393 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0a0f2d9-bd63-4dc5-826c-5d67f92a31da-config-data\") pod \"barbican-keystone-listener-866db59d-m5kdr\" (UID: \"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da\") " pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.630006 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be2eab6d-9a04-400b-baa9-c20fe5fcd269-logs\") pod \"barbican-worker-747c75f8c-ljgzl\" (UID: \"be2eab6d-9a04-400b-baa9-c20fe5fcd269\") " pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.663613 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2eab6d-9a04-400b-baa9-c20fe5fcd269-combined-ca-bundle\") pod \"barbican-worker-747c75f8c-ljgzl\" (UID: \"be2eab6d-9a04-400b-baa9-c20fe5fcd269\") " pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.679319 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2eab6d-9a04-400b-baa9-c20fe5fcd269-config-data\") pod \"barbican-worker-747c75f8c-ljgzl\" (UID: \"be2eab6d-9a04-400b-baa9-c20fe5fcd269\") " pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.700583 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be2eab6d-9a04-400b-baa9-c20fe5fcd269-config-data-custom\") pod \"barbican-worker-747c75f8c-ljgzl\" (UID: \"be2eab6d-9a04-400b-baa9-c20fe5fcd269\") " pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.752556 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0a0f2d9-bd63-4dc5-826c-5d67f92a31da-config-data-custom\") pod 
\"barbican-keystone-listener-866db59d-m5kdr\" (UID: \"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da\") " pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.752632 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0a0f2d9-bd63-4dc5-826c-5d67f92a31da-config-data\") pod \"barbican-keystone-listener-866db59d-m5kdr\" (UID: \"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da\") " pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.756284 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0a0f2d9-bd63-4dc5-826c-5d67f92a31da-logs\") pod \"barbican-keystone-listener-866db59d-m5kdr\" (UID: \"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da\") " pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.756425 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l6td\" (UniqueName: \"kubernetes.io/projected/f0a0f2d9-bd63-4dc5-826c-5d67f92a31da-kube-api-access-2l6td\") pod \"barbican-keystone-listener-866db59d-m5kdr\" (UID: \"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da\") " pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.757146 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a0f2d9-bd63-4dc5-826c-5d67f92a31da-combined-ca-bundle\") pod \"barbican-keystone-listener-866db59d-m5kdr\" (UID: \"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da\") " pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.760285 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0a0f2d9-bd63-4dc5-826c-5d67f92a31da-logs\") pod \"barbican-keystone-listener-866db59d-m5kdr\" (UID: \"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da\") " pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.778962 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0a0f2d9-bd63-4dc5-826c-5d67f92a31da-config-data-custom\") pod \"barbican-keystone-listener-866db59d-m5kdr\" (UID: \"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da\") " pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.795282 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvx6b\" (UniqueName: \"kubernetes.io/projected/be2eab6d-9a04-400b-baa9-c20fe5fcd269-kube-api-access-fvx6b\") pod \"barbican-worker-747c75f8c-ljgzl\" (UID: \"be2eab6d-9a04-400b-baa9-c20fe5fcd269\") " pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.808199 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l6td\" (UniqueName: \"kubernetes.io/projected/f0a0f2d9-bd63-4dc5-826c-5d67f92a31da-kube-api-access-2l6td\") pod \"barbican-keystone-listener-866db59d-m5kdr\" (UID: \"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da\") " pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.809484 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0a0f2d9-bd63-4dc5-826c-5d67f92a31da-config-data\") pod \"barbican-keystone-listener-866db59d-m5kdr\" (UID: \"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da\") " pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.812033 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a0f2d9-bd63-4dc5-826c-5d67f92a31da-combined-ca-bundle\") pod \"barbican-keystone-listener-866db59d-m5kdr\" (UID: \"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da\") " pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.866883 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" podUID="29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: connect: connection refused" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.882851 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5t55b"] Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.884752 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.920212 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5t55b"] Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.936494 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-866db59d-m5kdr" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.943673 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8697d7f674-lmsql"] Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.946053 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.948986 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.954590 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8697d7f674-lmsql"] Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.966536 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-config\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.966581 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.966619 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.966642 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6n7p\" (UniqueName: \"kubernetes.io/projected/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-kube-api-access-w6n7p\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.966660 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:33 crc kubenswrapper[4644]: I0204 09:00:33.966686 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.032313 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-747c75f8c-ljgzl" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.065685 4644 generic.go:334] "Generic (PLEG): container finished" podID="29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" containerID="abca8bb5aed873023538d417d790b2a138c27b4ea5ecc781c6a6caa88e5f2af1" exitCode=0 Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.065876 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" event={"ID":"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c","Type":"ContainerDied","Data":"abca8bb5aed873023538d417d790b2a138c27b4ea5ecc781c6a6caa88e5f2af1"} Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.102447 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-combined-ca-bundle\") pod \"barbican-api-8697d7f674-lmsql\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.102496 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-config-data-custom\") pod \"barbican-api-8697d7f674-lmsql\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.102526 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-config-data\") pod \"barbican-api-8697d7f674-lmsql\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.102565 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-config\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.102592 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.102625 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.102648 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6n7p\" (UniqueName: \"kubernetes.io/projected/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-kube-api-access-w6n7p\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.102684 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.102708 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.102786 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-logs\") pod \"barbican-api-8697d7f674-lmsql\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.102815 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b26ns\" (UniqueName: \"kubernetes.io/projected/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-kube-api-access-b26ns\") pod \"barbican-api-8697d7f674-lmsql\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.104553 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-config\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.105093 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.105620 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.106790 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.107336 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.149113 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6n7p\" (UniqueName: 
\"kubernetes.io/projected/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-kube-api-access-w6n7p\") pod \"dnsmasq-dns-848cf88cfc-5t55b\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.204087 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-combined-ca-bundle\") pod \"barbican-api-8697d7f674-lmsql\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.205370 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-config-data-custom\") pod \"barbican-api-8697d7f674-lmsql\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.205408 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-config-data\") pod \"barbican-api-8697d7f674-lmsql\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.205606 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-logs\") pod \"barbican-api-8697d7f674-lmsql\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.205643 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b26ns\" (UniqueName: \"kubernetes.io/projected/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-kube-api-access-b26ns\") pod \"barbican-api-8697d7f674-lmsql\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.215661 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-config-data\") pod \"barbican-api-8697d7f674-lmsql\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.215942 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-logs\") pod \"barbican-api-8697d7f674-lmsql\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.232090 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-combined-ca-bundle\") pod \"barbican-api-8697d7f674-lmsql\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.235958 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b26ns\" (UniqueName: 
\"kubernetes.io/projected/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-kube-api-access-b26ns\") pod \"barbican-api-8697d7f674-lmsql\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.236562 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-config-data-custom\") pod \"barbican-api-8697d7f674-lmsql\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.251111 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.282025 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.525288 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.614629 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-dns-swift-storage-0\") pod \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.614742 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-dns-svc\") pod \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.614776 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl6wh\" (UniqueName: \"kubernetes.io/projected/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-kube-api-access-zl6wh\") pod \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.614896 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-ovsdbserver-nb\") pod \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.614922 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-ovsdbserver-sb\") pod \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.615007 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-config\") pod \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\" (UID: \"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c\") " Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.631567 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-kube-api-access-zl6wh" (OuterVolumeSpecName: 
"kube-api-access-zl6wh") pod "29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" (UID: "29d48c89-782f-41fd-9fb8-f3cd6edd0a4c"). InnerVolumeSpecName "kube-api-access-zl6wh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.723703 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl6wh\" (UniqueName: \"kubernetes.io/projected/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-kube-api-access-zl6wh\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.751992 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" (UID: "29d48c89-782f-41fd-9fb8-f3cd6edd0a4c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.782405 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" (UID: "29d48c89-782f-41fd-9fb8-f3cd6edd0a4c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.818265 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-config" (OuterVolumeSpecName: "config") pod "29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" (UID: "29d48c89-782f-41fd-9fb8-f3cd6edd0a4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.825283 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.832236 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.832526 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-config\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.855188 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-866db59d-m5kdr"] Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.942566 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nlr7w" Feb 04 09:00:34 crc kubenswrapper[4644]: I0204 09:00:34.982572 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" (UID: "29d48c89-782f-41fd-9fb8-f3cd6edd0a4c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.011688 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" (UID: "29d48c89-782f-41fd-9fb8-f3cd6edd0a4c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.042860 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-config-data\") pod \"c6677efd-b2e4-45b7-8703-3a189d87723d\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.043249 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6677efd-b2e4-45b7-8703-3a189d87723d-logs\") pod \"c6677efd-b2e4-45b7-8703-3a189d87723d\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.043281 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-scripts\") pod \"c6677efd-b2e4-45b7-8703-3a189d87723d\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.043312 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2jz5\" (UniqueName: \"kubernetes.io/projected/c6677efd-b2e4-45b7-8703-3a189d87723d-kube-api-access-m2jz5\") pod \"c6677efd-b2e4-45b7-8703-3a189d87723d\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.043577 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-combined-ca-bundle\") pod \"c6677efd-b2e4-45b7-8703-3a189d87723d\" (UID: \"c6677efd-b2e4-45b7-8703-3a189d87723d\") " Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.044166 4644 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.044187 4644 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.044762 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6677efd-b2e4-45b7-8703-3a189d87723d-logs" (OuterVolumeSpecName: "logs") pod "c6677efd-b2e4-45b7-8703-3a189d87723d" (UID: "c6677efd-b2e4-45b7-8703-3a189d87723d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.047715 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-scripts" (OuterVolumeSpecName: "scripts") pod "c6677efd-b2e4-45b7-8703-3a189d87723d" (UID: "c6677efd-b2e4-45b7-8703-3a189d87723d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.070920 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6677efd-b2e4-45b7-8703-3a189d87723d-kube-api-access-m2jz5" (OuterVolumeSpecName: "kube-api-access-m2jz5") pod "c6677efd-b2e4-45b7-8703-3a189d87723d" (UID: "c6677efd-b2e4-45b7-8703-3a189d87723d"). InnerVolumeSpecName "kube-api-access-m2jz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.082537 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-config-data" (OuterVolumeSpecName: "config-data") pod "c6677efd-b2e4-45b7-8703-3a189d87723d" (UID: "c6677efd-b2e4-45b7-8703-3a189d87723d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.112257 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6677efd-b2e4-45b7-8703-3a189d87723d" (UID: "c6677efd-b2e4-45b7-8703-3a189d87723d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.121546 4644 generic.go:334] "Generic (PLEG): container finished" podID="f6df95b1-d952-4b17-bb90-2a32fecb0a5b" containerID="334bb5ebaa42cec7c5168838f937cf076e611ae95464a97820900a1878fbd00d" exitCode=0 Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.121620 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wlwtb" event={"ID":"f6df95b1-d952-4b17-bb90-2a32fecb0a5b","Type":"ContainerDied","Data":"334bb5ebaa42cec7c5168838f937cf076e611ae95464a97820900a1878fbd00d"} Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.126276 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nlr7w" event={"ID":"c6677efd-b2e4-45b7-8703-3a189d87723d","Type":"ContainerDied","Data":"aaf0091551a1bbbb4cc5ce222f8f84cdc4e32fcf7e2160f67a424504d89617cf"} Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.126316 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaf0091551a1bbbb4cc5ce222f8f84cdc4e32fcf7e2160f67a424504d89617cf" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.126393 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-nlr7w" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.148833 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.148864 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6677efd-b2e4-45b7-8703-3a189d87723d-logs\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.148875 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.148888 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2jz5\" (UniqueName: \"kubernetes.io/projected/c6677efd-b2e4-45b7-8703-3a189d87723d-kube-api-access-m2jz5\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.148903 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6677efd-b2e4-45b7-8703-3a189d87723d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.152879 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-866db59d-m5kdr" event={"ID":"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da","Type":"ContainerStarted","Data":"03ec22eddf2dd2217fa5820fd81a09e99ae7d78d494b645926c1689895788c82"} Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.155197 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" event={"ID":"29d48c89-782f-41fd-9fb8-f3cd6edd0a4c","Type":"ContainerDied","Data":"ac57d5df5bfca740e57eb3cd254b1cb84c577b154b8bb95407b6cb0241b60cdc"} Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.155243 4644 scope.go:117] "RemoveContainer" containerID="abca8bb5aed873023538d417d790b2a138c27b4ea5ecc781c6a6caa88e5f2af1" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.155448 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-2b2sv" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.181642 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad50edf-1565-43a4-b0c6-7aef5bc98722","Type":"ContainerStarted","Data":"db929804af7d7d1f8ca9dbd96ece1821b498c830df60c15cf136a78965e102a5"} Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.246609 4644 scope.go:117] "RemoveContainer" containerID="01cc4d7faf1337a35ddc34ca19428c3bef327c8e25e15608a360ec6ea3ef6eec" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.260637 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-2b2sv"] Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.299928 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-2b2sv"] Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.309606 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6fc7776988-rx9dz"] Feb 04 09:00:35 crc kubenswrapper[4644]: E0204 09:00:35.310080 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" containerName="init" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.310103 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" containerName="init" Feb 04 09:00:35 crc kubenswrapper[4644]: E0204 09:00:35.310122 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6677efd-b2e4-45b7-8703-3a189d87723d" containerName="placement-db-sync" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.310129 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6677efd-b2e4-45b7-8703-3a189d87723d" containerName="placement-db-sync" Feb 04 09:00:35 crc kubenswrapper[4644]: E0204 09:00:35.310158 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" containerName="dnsmasq-dns" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.310164 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" containerName="dnsmasq-dns" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.310448 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" containerName="dnsmasq-dns" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.310470 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6677efd-b2e4-45b7-8703-3a189d87723d" containerName="placement-db-sync" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.311560 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.321829 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6fc7776988-rx9dz"] Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.324012 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.324554 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.325608 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zpdz8" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.325872 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.334432 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.352135 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4a7be8-11a8-4974-80dc-0893a6f9c104-public-tls-certs\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.352396 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4a7be8-11a8-4974-80dc-0893a6f9c104-internal-tls-certs\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.353167 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe4a7be8-11a8-4974-80dc-0893a6f9c104-logs\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.353285 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe4a7be8-11a8-4974-80dc-0893a6f9c104-scripts\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.353493 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4a7be8-11a8-4974-80dc-0893a6f9c104-config-data\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.353586 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4a7be8-11a8-4974-80dc-0893a6f9c104-combined-ca-bundle\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.353672 4644 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffq78\" (UniqueName: \"kubernetes.io/projected/fe4a7be8-11a8-4974-80dc-0893a6f9c104-kube-api-access-ffq78\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.416568 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-747c75f8c-ljgzl"] Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.456479 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4a7be8-11a8-4974-80dc-0893a6f9c104-internal-tls-certs\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.457441 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe4a7be8-11a8-4974-80dc-0893a6f9c104-logs\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.457875 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe4a7be8-11a8-4974-80dc-0893a6f9c104-scripts\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.458017 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffq78\" (UniqueName: \"kubernetes.io/projected/fe4a7be8-11a8-4974-80dc-0893a6f9c104-kube-api-access-ffq78\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.458095 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4a7be8-11a8-4974-80dc-0893a6f9c104-config-data\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.458168 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4a7be8-11a8-4974-80dc-0893a6f9c104-combined-ca-bundle\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.458312 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4a7be8-11a8-4974-80dc-0893a6f9c104-public-tls-certs\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.457827 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe4a7be8-11a8-4974-80dc-0893a6f9c104-logs\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 
09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.467967 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4a7be8-11a8-4974-80dc-0893a6f9c104-internal-tls-certs\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.468804 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4a7be8-11a8-4974-80dc-0893a6f9c104-combined-ca-bundle\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.469264 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4a7be8-11a8-4974-80dc-0893a6f9c104-public-tls-certs\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.472944 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe4a7be8-11a8-4974-80dc-0893a6f9c104-scripts\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.474567 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4a7be8-11a8-4974-80dc-0893a6f9c104-config-data\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.478791 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffq78\" (UniqueName: \"kubernetes.io/projected/fe4a7be8-11a8-4974-80dc-0893a6f9c104-kube-api-access-ffq78\") pod \"placement-6fc7776988-rx9dz\" (UID: \"fe4a7be8-11a8-4974-80dc-0893a6f9c104\") " pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.554580 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.554616 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.556054 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5t55b"] Feb 04 09:00:35 crc kubenswrapper[4644]: W0204 09:00:35.584501 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a13c9a4_cb5d_415c_8515_ab0fac5f0aa2.slice/crio-57d4ed60433bebd8b953c29911c25fcb61444b2cc94c165019983f42457bb41e WatchSource:0}: Error finding container 
57d4ed60433bebd8b953c29911c25fcb61444b2cc94c165019983f42457bb41e: Status 404 returned error can't find the container with id 57d4ed60433bebd8b953c29911c25fcb61444b2cc94c165019983f42457bb41e Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.586431 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8697d7f674-lmsql"] Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.720318 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:35 crc kubenswrapper[4644]: I0204 09:00:35.865427 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=< Feb 04 09:00:35 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:00:35 crc kubenswrapper[4644]: > Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.205870 4644 generic.go:334] "Generic (PLEG): container finished" podID="9b12ce39-cc90-4762-9d85-3138ee4e0bc0" containerID="1790dc560413ac2c27dac560219d7278a3ab993a404d025c0ae83099a2b08498" exitCode=0 Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.206200 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" event={"ID":"9b12ce39-cc90-4762-9d85-3138ee4e0bc0","Type":"ContainerDied","Data":"1790dc560413ac2c27dac560219d7278a3ab993a404d025c0ae83099a2b08498"} Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.206775 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" event={"ID":"9b12ce39-cc90-4762-9d85-3138ee4e0bc0","Type":"ContainerStarted","Data":"e1717c4dd247f03a52b94da454068bd842556d3d613615f3fbb7b895c4b6d487"} Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.242865 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-747c75f8c-ljgzl" event={"ID":"be2eab6d-9a04-400b-baa9-c20fe5fcd269","Type":"ContainerStarted","Data":"51099b11ebebd204c93b0e84765718f4e83a31a0f81268b92bda47b6c3085572"} Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.303846 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad50edf-1565-43a4-b0c6-7aef5bc98722","Type":"ContainerStarted","Data":"97630ef708deaa5a2cd22a7967df419a0ba4b059ee7f9db1a41d9703d870ca0a"} Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.306154 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8697d7f674-lmsql" event={"ID":"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2","Type":"ContainerStarted","Data":"d00befb941562b351226d9d13c03307aded65f10af25bb2456cebe7f77a6e69d"} Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.306200 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8697d7f674-lmsql" event={"ID":"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2","Type":"ContainerStarted","Data":"57d4ed60433bebd8b953c29911c25fcb61444b2cc94c165019983f42457bb41e"} Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.541014 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6fc7776988-rx9dz"] Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.615755 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fb9db66f6-v84nx" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.615826 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.616546 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"8902692ba02e033725e2a06b7322db9e6b6ebfef6c1d54196d590bb4f96705ea"} pod="openstack/horizon-5fb9db66f6-v84nx" containerMessage="Container horizon failed startup probe, will be restarted" Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.616580 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fb9db66f6-v84nx" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" containerID="cri-o://8902692ba02e033725e2a06b7322db9e6b6ebfef6c1d54196d590bb4f96705ea" gracePeriod=30 Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.674587 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-658bfcb544-88gj4" podUID="676db25f-e0ad-48cc-af2c-88029d6eb80d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.680725 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d48c89-782f-41fd-9fb8-f3cd6edd0a4c" path="/var/lib/kubelet/pods/29d48c89-782f-41fd-9fb8-f3cd6edd0a4c/volumes" Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.725654 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-658bfcb544-88gj4" Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.726551 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"95510a00131773c32fd94f8ae9454628b093e68aa915de83d3362250d0551602"} pod="openstack/horizon-658bfcb544-88gj4" containerMessage="Container horizon failed startup probe, will be restarted" Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.726606 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-658bfcb544-88gj4" podUID="676db25f-e0ad-48cc-af2c-88029d6eb80d" containerName="horizon" containerID="cri-o://95510a00131773c32fd94f8ae9454628b093e68aa915de83d3362250d0551602" gracePeriod=30 Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.742181 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wlwtb" Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.909169 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-scripts\") pod \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.909552 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-db-sync-config-data\") pod \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.909576 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-etc-machine-id\") pod \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.909661 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-config-data\") pod \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.909915 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-combined-ca-bundle\") pod \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.909993 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f6df95b1-d952-4b17-bb90-2a32fecb0a5b" (UID: "f6df95b1-d952-4b17-bb90-2a32fecb0a5b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.910086 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n62d\" (UniqueName: \"kubernetes.io/projected/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-kube-api-access-8n62d\") pod \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\" (UID: \"f6df95b1-d952-4b17-bb90-2a32fecb0a5b\") " Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.910834 4644 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.918823 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f6df95b1-d952-4b17-bb90-2a32fecb0a5b" (UID: "f6df95b1-d952-4b17-bb90-2a32fecb0a5b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.920467 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-kube-api-access-8n62d" (OuterVolumeSpecName: "kube-api-access-8n62d") pod "f6df95b1-d952-4b17-bb90-2a32fecb0a5b" (UID: "f6df95b1-d952-4b17-bb90-2a32fecb0a5b"). InnerVolumeSpecName "kube-api-access-8n62d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.922247 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-scripts" (OuterVolumeSpecName: "scripts") pod "f6df95b1-d952-4b17-bb90-2a32fecb0a5b" (UID: "f6df95b1-d952-4b17-bb90-2a32fecb0a5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.964490 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6df95b1-d952-4b17-bb90-2a32fecb0a5b" (UID: "f6df95b1-d952-4b17-bb90-2a32fecb0a5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:36 crc kubenswrapper[4644]: I0204 09:00:36.981442 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-config-data" (OuterVolumeSpecName: "config-data") pod "f6df95b1-d952-4b17-bb90-2a32fecb0a5b" (UID: "f6df95b1-d952-4b17-bb90-2a32fecb0a5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.015446 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n62d\" (UniqueName: \"kubernetes.io/projected/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-kube-api-access-8n62d\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.015482 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.015492 4644 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.015502 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.015512 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6df95b1-d952-4b17-bb90-2a32fecb0a5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.317810 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8697d7f674-lmsql" event={"ID":"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2","Type":"ContainerStarted","Data":"fc75c32e5004f2711da02dbf6ac1e60342dc512564cfd2b9f5d19f28447dcf51"} Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.317876 4644 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.317889 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.327136 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fc7776988-rx9dz" event={"ID":"fe4a7be8-11a8-4974-80dc-0893a6f9c104","Type":"ContainerStarted","Data":"22ffdfb81ce87fb66a96f74d731016828083f595b9a46488eab72055ff5557d9"} Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.327177 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fc7776988-rx9dz" event={"ID":"fe4a7be8-11a8-4974-80dc-0893a6f9c104","Type":"ContainerStarted","Data":"bb2290b96e5d6ec77a183d5413d6ff8ef9e5987824a05680b1a7cf18dec142d5"} Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.332988 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" event={"ID":"9b12ce39-cc90-4762-9d85-3138ee4e0bc0","Type":"ContainerStarted","Data":"f1f68d938b03c87bc3658b9a217fa58cd4bb7679738b69992561720787a461be"} Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.333522 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.343625 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wlwtb" event={"ID":"f6df95b1-d952-4b17-bb90-2a32fecb0a5b","Type":"ContainerDied","Data":"92a0752ee0aad4b74673b62ed5660b1fd391c59e85385393df04c155c3b08202"} Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.343662 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92a0752ee0aad4b74673b62ed5660b1fd391c59e85385393df04c155c3b08202" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.343720 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wlwtb" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.367457 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8697d7f674-lmsql" podStartSLOduration=4.367440744 podStartE2EDuration="4.367440744s" podCreationTimestamp="2026-02-04 09:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:00:37.355183577 +0000 UTC m=+1147.395241332" watchObservedRunningTime="2026-02-04 09:00:37.367440744 +0000 UTC m=+1147.407498499" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.493768 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" podStartSLOduration=4.493745984 podStartE2EDuration="4.493745984s" podCreationTimestamp="2026-02-04 09:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:00:37.406085892 +0000 UTC m=+1147.446143647" watchObservedRunningTime="2026-02-04 09:00:37.493745984 +0000 UTC m=+1147.533803739" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.509390 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-558bf4756b-n2g7b"] Feb 04 09:00:37 crc kubenswrapper[4644]: E0204 09:00:37.509798 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6df95b1-d952-4b17-bb90-2a32fecb0a5b" containerName="cinder-db-sync" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.509816 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6df95b1-d952-4b17-bb90-2a32fecb0a5b" containerName="cinder-db-sync" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.510013 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6df95b1-d952-4b17-bb90-2a32fecb0a5b" containerName="cinder-db-sync" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.510950 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.516215 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.516417 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.528966 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-558bf4756b-n2g7b"] Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.540270 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-logs\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.540347 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-internal-tls-certs\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.540387 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-public-tls-certs\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.540455 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62pq6\" (UniqueName: \"kubernetes.io/projected/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-kube-api-access-62pq6\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.540498 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-combined-ca-bundle\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.540516 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-config-data\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.540548 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-config-data-custom\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.644693 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-combined-ca-bundle\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.644746 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-config-data\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.644798 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-config-data-custom\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.644817 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-logs\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.644853 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-internal-tls-certs\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.644892 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-public-tls-certs\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.644955 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62pq6\" (UniqueName: \"kubernetes.io/projected/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-kube-api-access-62pq6\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.647309 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-logs\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.686024 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-internal-tls-certs\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.686151 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-combined-ca-bundle\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.707393 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-config-data\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.707996 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-config-data-custom\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.727803 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-public-tls-certs\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.776933 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.778843 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.794135 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-mpwwp" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.794512 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.794760 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.794923 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.800847 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.819086 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62pq6\" (UniqueName: \"kubernetes.io/projected/9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7-kube-api-access-62pq6\") pod \"barbican-api-558bf4756b-n2g7b\" (UID: \"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7\") " pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.858296 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crgf7\" (UniqueName: \"kubernetes.io/projected/5030b5b9-3a84-4316-8559-7205ae9b179e-kube-api-access-crgf7\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.858366 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-scripts\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.858402 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.858558 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-config-data\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.858963 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5030b5b9-3a84-4316-8559-7205ae9b179e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.859013 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.868833 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.964785 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5030b5b9-3a84-4316-8559-7205ae9b179e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.964843 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.964886 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crgf7\" (UniqueName: \"kubernetes.io/projected/5030b5b9-3a84-4316-8559-7205ae9b179e-kube-api-access-crgf7\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.964915 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-scripts\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.964952 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.964976 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-config-data\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.978519 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5t55b"] Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.979806 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-config-data\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.979910 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5030b5b9-3a84-4316-8559-7205ae9b179e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:37 crc kubenswrapper[4644]: I0204 09:00:37.985424 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:38 crc 
kubenswrapper[4644]: I0204 09:00:38.000647 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.018797 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crgf7\" (UniqueName: \"kubernetes.io/projected/5030b5b9-3a84-4316-8559-7205ae9b179e-kube-api-access-crgf7\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.025048 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-scripts\") pod \"cinder-scheduler-0\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.044517 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mtv2p"] Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.047796 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.086206 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.086284 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.087577 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.087628 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvv4v\" (UniqueName: \"kubernetes.io/projected/c84d00ec-439f-4e2a-8c88-290eb2a194ac-kube-api-access-hvv4v\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.087786 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-config\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.094702 4644 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-dns-svc\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.113302 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mtv2p"] Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.135131 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.143431 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.145620 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.151379 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.163985 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.196542 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.196606 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz5dr\" (UniqueName: \"kubernetes.io/projected/aaf68dbc-3305-4745-b403-c5426622f8ed-kube-api-access-mz5dr\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.196654 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-config\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.196688 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-dns-svc\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.196718 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-config-data-custom\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.196747 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" 
Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.196777 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.196897 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.196929 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvv4v\" (UniqueName: \"kubernetes.io/projected/c84d00ec-439f-4e2a-8c88-290eb2a194ac-kube-api-access-hvv4v\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.196957 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-scripts\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.196982 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaf68dbc-3305-4745-b403-c5426622f8ed-logs\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.197009 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaf68dbc-3305-4745-b403-c5426622f8ed-etc-machine-id\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.197032 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-config-data\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.198216 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-config\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.198923 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-dns-svc\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.201547 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.204248 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.210235 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.261266 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvv4v\" (UniqueName: \"kubernetes.io/projected/c84d00ec-439f-4e2a-8c88-290eb2a194ac-kube-api-access-hvv4v\") pod \"dnsmasq-dns-6578955fd5-mtv2p\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.300625 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-config-data-custom\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.300768 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-scripts\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.300794 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaf68dbc-3305-4745-b403-c5426622f8ed-logs\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.300814 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaf68dbc-3305-4745-b403-c5426622f8ed-etc-machine-id\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.300833 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-config-data\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.300858 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.300887 4644 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz5dr\" (UniqueName: \"kubernetes.io/projected/aaf68dbc-3305-4745-b403-c5426622f8ed-kube-api-access-mz5dr\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.301582 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaf68dbc-3305-4745-b403-c5426622f8ed-logs\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.301631 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaf68dbc-3305-4745-b403-c5426622f8ed-etc-machine-id\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.333196 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.339660 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.346836 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-scripts\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.356701 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz5dr\" (UniqueName: \"kubernetes.io/projected/aaf68dbc-3305-4745-b403-c5426622f8ed-kube-api-access-mz5dr\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.357022 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-config-data\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.358489 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-config-data-custom\") pod \"cinder-api-0\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " pod="openstack/cinder-api-0" Feb 04 09:00:38 crc kubenswrapper[4644]: I0204 09:00:38.647640 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 04 09:00:39 crc kubenswrapper[4644]: I0204 09:00:39.133687 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 09:00:39 crc kubenswrapper[4644]: I0204 09:00:39.181572 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-558bf4756b-n2g7b"] Feb 04 09:00:39 crc kubenswrapper[4644]: I0204 09:00:39.351916 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mtv2p"] Feb 04 09:00:39 crc kubenswrapper[4644]: I0204 09:00:39.414091 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad50edf-1565-43a4-b0c6-7aef5bc98722","Type":"ContainerStarted","Data":"bd06bbde6b0ee3fa4daf18441153d9637862bf4c638420d2f86ff11913f8fd1f"} Feb 04 09:00:39 crc kubenswrapper[4644]: I0204 09:00:39.414267 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 04 09:00:39 crc kubenswrapper[4644]: I0204 09:00:39.426202 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" podUID="9b12ce39-cc90-4762-9d85-3138ee4e0bc0" containerName="dnsmasq-dns" containerID="cri-o://f1f68d938b03c87bc3658b9a217fa58cd4bb7679738b69992561720787a461be" gracePeriod=10 Feb 04 09:00:39 crc kubenswrapper[4644]: I0204 09:00:39.428491 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fc7776988-rx9dz" event={"ID":"fe4a7be8-11a8-4974-80dc-0893a6f9c104","Type":"ContainerStarted","Data":"fd363ed077e142d7ac3bff99ebcd7a3db36291a3c1997a5de34a9e145cdd855e"} Feb 04 09:00:39 crc kubenswrapper[4644]: I0204 09:00:39.428562 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:39 crc kubenswrapper[4644]: I0204 09:00:39.429301 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:00:39 crc kubenswrapper[4644]: I0204 09:00:39.463890 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.386772336 podStartE2EDuration="8.463871804s" podCreationTimestamp="2026-02-04 09:00:31 +0000 UTC" firstStartedPulling="2026-02-04 09:00:32.012888368 +0000 UTC m=+1142.052946123" lastFinishedPulling="2026-02-04 09:00:38.089987836 +0000 UTC m=+1148.130045591" observedRunningTime="2026-02-04 09:00:39.443880383 +0000 UTC m=+1149.483938138" watchObservedRunningTime="2026-02-04 09:00:39.463871804 +0000 UTC m=+1149.503929559" Feb 04 09:00:39 crc kubenswrapper[4644]: W0204 09:00:39.817991 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc84d00ec_439f_4e2a_8c88_290eb2a194ac.slice/crio-f0d831ac5289d4637391c0f5dad009d686a2794ca2820c983f6b72772519f0d0 WatchSource:0}: Error finding container f0d831ac5289d4637391c0f5dad009d686a2794ca2820c983f6b72772519f0d0: Status 404 returned error can't find the container with id f0d831ac5289d4637391c0f5dad009d686a2794ca2820c983f6b72772519f0d0 Feb 04 09:00:40 crc kubenswrapper[4644]: I0204 09:00:40.248467 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6fc7776988-rx9dz" podStartSLOduration=5.248437406 podStartE2EDuration="5.248437406s" podCreationTimestamp="2026-02-04 09:00:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:00:39.479445299 +0000 UTC m=+1149.519503054" watchObservedRunningTime="2026-02-04 09:00:40.248437406 +0000 UTC m=+1150.288495161" Feb 04 09:00:40 crc kubenswrapper[4644]: I0204 09:00:40.255905 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 04 09:00:40 crc kubenswrapper[4644]: I0204 09:00:40.449312 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" event={"ID":"c84d00ec-439f-4e2a-8c88-290eb2a194ac","Type":"ContainerStarted","Data":"f0d831ac5289d4637391c0f5dad009d686a2794ca2820c983f6b72772519f0d0"} Feb 04 09:00:40 crc kubenswrapper[4644]: I0204 09:00:40.451104 4644 generic.go:334] "Generic (PLEG): container finished" podID="9b12ce39-cc90-4762-9d85-3138ee4e0bc0" containerID="f1f68d938b03c87bc3658b9a217fa58cd4bb7679738b69992561720787a461be" exitCode=0 Feb 04 09:00:40 crc kubenswrapper[4644]: I0204 09:00:40.451749 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" event={"ID":"9b12ce39-cc90-4762-9d85-3138ee4e0bc0","Type":"ContainerDied","Data":"f1f68d938b03c87bc3658b9a217fa58cd4bb7679738b69992561720787a461be"} Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.248587 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.278052 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-dns-swift-storage-0\") pod \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.278218 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-dns-svc\") pod \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.278292 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-ovsdbserver-sb\") pod \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.278318 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6n7p\" (UniqueName: \"kubernetes.io/projected/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-kube-api-access-w6n7p\") pod \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.278357 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-config\") pod \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.278438 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-ovsdbserver-nb\") pod \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\" (UID: \"9b12ce39-cc90-4762-9d85-3138ee4e0bc0\") " Feb 04 09:00:41 crc 
kubenswrapper[4644]: I0204 09:00:41.324947 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-kube-api-access-w6n7p" (OuterVolumeSpecName: "kube-api-access-w6n7p") pod "9b12ce39-cc90-4762-9d85-3138ee4e0bc0" (UID: "9b12ce39-cc90-4762-9d85-3138ee4e0bc0"). InnerVolumeSpecName "kube-api-access-w6n7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.380614 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6n7p\" (UniqueName: \"kubernetes.io/projected/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-kube-api-access-w6n7p\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.463554 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9b12ce39-cc90-4762-9d85-3138ee4e0bc0" (UID: "9b12ce39-cc90-4762-9d85-3138ee4e0bc0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.475632 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" event={"ID":"9b12ce39-cc90-4762-9d85-3138ee4e0bc0","Type":"ContainerDied","Data":"e1717c4dd247f03a52b94da454068bd842556d3d613615f3fbb7b895c4b6d487"} Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.475862 4644 scope.go:117] "RemoveContainer" containerID="f1f68d938b03c87bc3658b9a217fa58cd4bb7679738b69992561720787a461be" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.475694 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-5t55b" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.482553 4644 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.490544 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5030b5b9-3a84-4316-8559-7205ae9b179e","Type":"ContainerStarted","Data":"0fad0d5a7ae0f27074e380b8b004cd1d1c4ac1aedfe7db1a6017ccc70c6bee10"} Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.501825 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-558bf4756b-n2g7b" event={"ID":"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7","Type":"ContainerStarted","Data":"2c1e225927c1ef761637d623adf60059b5bbb84c27266669f6b3baed940dc713"} Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.532378 4644 scope.go:117] "RemoveContainer" containerID="1790dc560413ac2c27dac560219d7278a3ab993a404d025c0ae83099a2b08498" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.567866 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-config" (OuterVolumeSpecName: "config") pod "9b12ce39-cc90-4762-9d85-3138ee4e0bc0" (UID: "9b12ce39-cc90-4762-9d85-3138ee4e0bc0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.588640 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-config\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.670102 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b12ce39-cc90-4762-9d85-3138ee4e0bc0" (UID: "9b12ce39-cc90-4762-9d85-3138ee4e0bc0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.690532 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.710648 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9b12ce39-cc90-4762-9d85-3138ee4e0bc0" (UID: "9b12ce39-cc90-4762-9d85-3138ee4e0bc0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.732545 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b12ce39-cc90-4762-9d85-3138ee4e0bc0" (UID: "9b12ce39-cc90-4762-9d85-3138ee4e0bc0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.796989 4644 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.797024 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b12ce39-cc90-4762-9d85-3138ee4e0bc0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.836723 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5t55b"] Feb 04 09:00:41 crc kubenswrapper[4644]: I0204 09:00:41.864841 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5t55b"] Feb 04 09:00:42 crc kubenswrapper[4644]: I0204 09:00:42.106923 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 04 09:00:42 crc kubenswrapper[4644]: W0204 09:00:42.111233 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaf68dbc_3305_4745_b403_c5426622f8ed.slice/crio-12995f62c05743c6b8d1d4af7d9abd4d6a681c0f9a62a2e746acc1d15cbdbfa0 WatchSource:0}: Error finding container 12995f62c05743c6b8d1d4af7d9abd4d6a681c0f9a62a2e746acc1d15cbdbfa0: Status 404 returned error can't find the container with id 12995f62c05743c6b8d1d4af7d9abd4d6a681c0f9a62a2e746acc1d15cbdbfa0 Feb 04 09:00:42 crc kubenswrapper[4644]: I0204 09:00:42.523835 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-558bf4756b-n2g7b" event={"ID":"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7","Type":"ContainerStarted","Data":"ba0e323d8174d7c2e375c76f8548274ce6290616a6f11ec1ada29b7957c716b4"} Feb 04 09:00:42 crc kubenswrapper[4644]: I0204 09:00:42.533613 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-747c75f8c-ljgzl" event={"ID":"be2eab6d-9a04-400b-baa9-c20fe5fcd269","Type":"ContainerStarted","Data":"4681c2c6bf7c15a5248705759990a8f52b083412a81027c1153134eb6e57783f"} Feb 04 09:00:42 crc kubenswrapper[4644]: I0204 09:00:42.538972 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-866db59d-m5kdr" event={"ID":"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da","Type":"ContainerStarted","Data":"ea65550875e62113f33e0d579e29a47bf9498c6d2bdcd1297dad11eebcd6b22c"} Feb 04 09:00:42 crc kubenswrapper[4644]: I0204 09:00:42.549574 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aaf68dbc-3305-4745-b403-c5426622f8ed","Type":"ContainerStarted","Data":"12995f62c05743c6b8d1d4af7d9abd4d6a681c0f9a62a2e746acc1d15cbdbfa0"} Feb 04 09:00:42 crc kubenswrapper[4644]: I0204 09:00:42.564126 4644 generic.go:334] "Generic (PLEG): container finished" podID="c84d00ec-439f-4e2a-8c88-290eb2a194ac" containerID="2bcc80d100e35eb6434d8ba35d000a38a8e6c29dead3b1bac14c9778d67456a7" exitCode=0 Feb 04 09:00:42 crc kubenswrapper[4644]: I0204 09:00:42.564166 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" event={"ID":"c84d00ec-439f-4e2a-8c88-290eb2a194ac","Type":"ContainerDied","Data":"2bcc80d100e35eb6434d8ba35d000a38a8e6c29dead3b1bac14c9778d67456a7"} Feb 04 09:00:42 crc kubenswrapper[4644]: I0204 09:00:42.566731 4644 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/barbican-keystone-listener-866db59d-m5kdr" podStartSLOduration=3.328880276 podStartE2EDuration="9.566706438s" podCreationTimestamp="2026-02-04 09:00:33 +0000 UTC" firstStartedPulling="2026-02-04 09:00:34.922936894 +0000 UTC m=+1144.962994649" lastFinishedPulling="2026-02-04 09:00:41.160763066 +0000 UTC m=+1151.200820811" observedRunningTime="2026-02-04 09:00:42.562400953 +0000 UTC m=+1152.602458718" watchObservedRunningTime="2026-02-04 09:00:42.566706438 +0000 UTC m=+1152.606764193" Feb 04 09:00:42 crc kubenswrapper[4644]: I0204 09:00:42.727480 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b12ce39-cc90-4762-9d85-3138ee4e0bc0" path="/var/lib/kubelet/pods/9b12ce39-cc90-4762-9d85-3138ee4e0bc0/volumes" Feb 04 09:00:43 crc kubenswrapper[4644]: I0204 09:00:43.610176 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-558bf4756b-n2g7b" event={"ID":"9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7","Type":"ContainerStarted","Data":"c0d53bd8ba3741388c53745ab49260ed6813fcfe0f6ad011262c7d0996fd66ba"} Feb 04 09:00:43 crc kubenswrapper[4644]: I0204 09:00:43.610571 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:43 crc kubenswrapper[4644]: I0204 09:00:43.610586 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:43 crc kubenswrapper[4644]: I0204 09:00:43.630230 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-747c75f8c-ljgzl" event={"ID":"be2eab6d-9a04-400b-baa9-c20fe5fcd269","Type":"ContainerStarted","Data":"781778b38420bea3f5be87ab7e2a3a39925947c40ea4048bbdb40bf9aa92dcf3"} Feb 04 09:00:43 crc kubenswrapper[4644]: I0204 09:00:43.649788 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-866db59d-m5kdr" event={"ID":"f0a0f2d9-bd63-4dc5-826c-5d67f92a31da","Type":"ContainerStarted","Data":"e513ebcfcd634371f6e583499f824e9daca4cef227b45a0e6f5eba2229a4580a"} Feb 04 09:00:43 crc kubenswrapper[4644]: I0204 09:00:43.653244 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aaf68dbc-3305-4745-b403-c5426622f8ed","Type":"ContainerStarted","Data":"f54103c71497c11b318336824239516f367bca780705e6482cacd5732edd7516"} Feb 04 09:00:43 crc kubenswrapper[4644]: I0204 09:00:43.657620 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-558bf4756b-n2g7b" podStartSLOduration=6.657608269 podStartE2EDuration="6.657608269s" podCreationTimestamp="2026-02-04 09:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:00:43.657148977 +0000 UTC m=+1153.697206732" watchObservedRunningTime="2026-02-04 09:00:43.657608269 +0000 UTC m=+1153.697666024" Feb 04 09:00:43 crc kubenswrapper[4644]: I0204 09:00:43.661427 4644 scope.go:117] "RemoveContainer" containerID="cb6cfc34a338bc735b4c1772b66035ccede7e4cff40fff8f8b5238a7eb360533" Feb 04 09:00:43 crc kubenswrapper[4644]: I0204 09:00:43.664161 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" event={"ID":"c84d00ec-439f-4e2a-8c88-290eb2a194ac","Type":"ContainerStarted","Data":"b34535d9f9bd2e647c3dc0a580e87c2ecda51eadecc75c88bf1439c33ce300ff"} Feb 04 09:00:43 crc kubenswrapper[4644]: I0204 09:00:43.664799 4644 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:43 crc kubenswrapper[4644]: I0204 09:00:43.696337 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-747c75f8c-ljgzl" podStartSLOduration=4.880408139 podStartE2EDuration="10.696306768s" podCreationTimestamp="2026-02-04 09:00:33 +0000 UTC" firstStartedPulling="2026-02-04 09:00:35.416654977 +0000 UTC m=+1145.456712732" lastFinishedPulling="2026-02-04 09:00:41.232553606 +0000 UTC m=+1151.272611361" observedRunningTime="2026-02-04 09:00:43.692782625 +0000 UTC m=+1153.732840380" watchObservedRunningTime="2026-02-04 09:00:43.696306768 +0000 UTC m=+1153.736364523" Feb 04 09:00:43 crc kubenswrapper[4644]: I0204 09:00:43.726947 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" podStartSLOduration=6.726932283 podStartE2EDuration="6.726932283s" podCreationTimestamp="2026-02-04 09:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:00:43.725985098 +0000 UTC m=+1153.766042853" watchObservedRunningTime="2026-02-04 09:00:43.726932283 +0000 UTC m=+1153.766990038" Feb 04 09:00:44 crc kubenswrapper[4644]: I0204 09:00:44.686177 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aaf68dbc-3305-4745-b403-c5426622f8ed","Type":"ContainerStarted","Data":"0bc621bc1174064f4bc6719128c19780fac1689570dcf197a950081c458dcfaa"} Feb 04 09:00:44 crc kubenswrapper[4644]: I0204 09:00:44.686500 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 04 09:00:44 crc kubenswrapper[4644]: I0204 09:00:44.686200 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="aaf68dbc-3305-4745-b403-c5426622f8ed" containerName="cinder-api-log" containerID="cri-o://f54103c71497c11b318336824239516f367bca780705e6482cacd5732edd7516" gracePeriod=30 Feb 04 09:00:44 crc kubenswrapper[4644]: I0204 09:00:44.686582 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="aaf68dbc-3305-4745-b403-c5426622f8ed" containerName="cinder-api" containerID="cri-o://0bc621bc1174064f4bc6719128c19780fac1689570dcf197a950081c458dcfaa" gracePeriod=30 Feb 04 09:00:44 crc kubenswrapper[4644]: I0204 09:00:44.701362 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5030b5b9-3a84-4316-8559-7205ae9b179e","Type":"ContainerStarted","Data":"de01cccc0a1b44c5242db56ad3fb3050e408dd612205632d55fbc97ad3e8c2d6"} Feb 04 09:00:44 crc kubenswrapper[4644]: I0204 09:00:44.710528 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5df75db7c8-8lxlc_2cbe3b5d-7379-447e-acac-6f7306ce230f/neutron-httpd/1.log" Feb 04 09:00:44 crc kubenswrapper[4644]: I0204 09:00:44.714551 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5df75db7c8-8lxlc" event={"ID":"2cbe3b5d-7379-447e-acac-6f7306ce230f","Type":"ContainerStarted","Data":"88a0f36a4b4008a2b674eb932a68e1369d7c2e1867c83679c53ae614b95df111"} Feb 04 09:00:44 crc kubenswrapper[4644]: I0204 09:00:44.715006 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:44 crc kubenswrapper[4644]: I0204 09:00:44.729762 4644 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.7297454309999996 podStartE2EDuration="6.729745431s" podCreationTimestamp="2026-02-04 09:00:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:00:44.72486295 +0000 UTC m=+1154.764920715" watchObservedRunningTime="2026-02-04 09:00:44.729745431 +0000 UTC m=+1154.769803176" Feb 04 09:00:45 crc kubenswrapper[4644]: I0204 09:00:45.328586 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-8697d7f674-lmsql" podUID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 04 09:00:45 crc kubenswrapper[4644]: I0204 09:00:45.728011 4644 generic.go:334] "Generic (PLEG): container finished" podID="aaf68dbc-3305-4745-b403-c5426622f8ed" containerID="f54103c71497c11b318336824239516f367bca780705e6482cacd5732edd7516" exitCode=143 Feb 04 09:00:45 crc kubenswrapper[4644]: I0204 09:00:45.728286 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aaf68dbc-3305-4745-b403-c5426622f8ed","Type":"ContainerDied","Data":"f54103c71497c11b318336824239516f367bca780705e6482cacd5732edd7516"} Feb 04 09:00:45 crc kubenswrapper[4644]: I0204 09:00:45.732526 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5030b5b9-3a84-4316-8559-7205ae9b179e","Type":"ContainerStarted","Data":"d6ea6c32e0b139d6be544dd124878a80e89614e0b4cd72706f57eef38f4d4fc5"} Feb 04 09:00:45 crc kubenswrapper[4644]: I0204 09:00:45.736735 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5df75db7c8-8lxlc_2cbe3b5d-7379-447e-acac-6f7306ce230f/neutron-httpd/2.log" Feb 04 09:00:45 crc kubenswrapper[4644]: I0204 09:00:45.737899 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5df75db7c8-8lxlc_2cbe3b5d-7379-447e-acac-6f7306ce230f/neutron-httpd/1.log" Feb 04 09:00:45 crc kubenswrapper[4644]: I0204 09:00:45.738788 4644 generic.go:334] "Generic (PLEG): container finished" podID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerID="88a0f36a4b4008a2b674eb932a68e1369d7c2e1867c83679c53ae614b95df111" exitCode=1 Feb 04 09:00:45 crc kubenswrapper[4644]: I0204 09:00:45.739774 4644 scope.go:117] "RemoveContainer" containerID="88a0f36a4b4008a2b674eb932a68e1369d7c2e1867c83679c53ae614b95df111" Feb 04 09:00:45 crc kubenswrapper[4644]: E0204 09:00:45.739939 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-5df75db7c8-8lxlc_openstack(2cbe3b5d-7379-447e-acac-6f7306ce230f)\"" pod="openstack/neutron-5df75db7c8-8lxlc" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" Feb 04 09:00:45 crc kubenswrapper[4644]: I0204 09:00:45.740170 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5df75db7c8-8lxlc" event={"ID":"2cbe3b5d-7379-447e-acac-6f7306ce230f","Type":"ContainerDied","Data":"88a0f36a4b4008a2b674eb932a68e1369d7c2e1867c83679c53ae614b95df111"} Feb 04 09:00:45 crc kubenswrapper[4644]: I0204 09:00:45.740197 4644 scope.go:117] "RemoveContainer" containerID="cb6cfc34a338bc735b4c1772b66035ccede7e4cff40fff8f8b5238a7eb360533" Feb 04 09:00:45 crc kubenswrapper[4644]: I0204 09:00:45.766909 4644 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.320419121 podStartE2EDuration="8.766893651s" podCreationTimestamp="2026-02-04 09:00:37 +0000 UTC" firstStartedPulling="2026-02-04 09:00:41.093090106 +0000 UTC m=+1151.133147881" lastFinishedPulling="2026-02-04 09:00:42.539564656 +0000 UTC m=+1152.579622411" observedRunningTime="2026-02-04 09:00:45.759056122 +0000 UTC m=+1155.799113877" watchObservedRunningTime="2026-02-04 09:00:45.766893651 +0000 UTC m=+1155.806951406" Feb 04 09:00:45 crc kubenswrapper[4644]: I0204 09:00:45.922587 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=< Feb 04 09:00:45 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:00:45 crc kubenswrapper[4644]: > Feb 04 09:00:46 crc kubenswrapper[4644]: I0204 09:00:46.748538 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5df75db7c8-8lxlc_2cbe3b5d-7379-447e-acac-6f7306ce230f/neutron-httpd/2.log" Feb 04 09:00:46 crc kubenswrapper[4644]: I0204 09:00:46.750511 4644 scope.go:117] "RemoveContainer" containerID="88a0f36a4b4008a2b674eb932a68e1369d7c2e1867c83679c53ae614b95df111" Feb 04 09:00:46 crc kubenswrapper[4644]: E0204 09:00:46.750708 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-5df75db7c8-8lxlc_openstack(2cbe3b5d-7379-447e-acac-6f7306ce230f)\"" pod="openstack/neutron-5df75db7c8-8lxlc" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" Feb 04 09:00:47 crc kubenswrapper[4644]: I0204 09:00:47.607615 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:47 crc kubenswrapper[4644]: I0204 09:00:47.728162 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-766cbd9f4b-bj8dc" Feb 04 09:00:47 crc kubenswrapper[4644]: I0204 09:00:47.804924 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:48 crc kubenswrapper[4644]: I0204 09:00:48.136606 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 04 09:00:48 crc kubenswrapper[4644]: I0204 09:00:48.358556 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:00:48 crc kubenswrapper[4644]: I0204 09:00:48.498916 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-s25jz"] Feb 04 09:00:48 crc kubenswrapper[4644]: I0204 09:00:48.499137 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" podUID="5ee070eb-92be-4710-904a-7dab66158ee2" containerName="dnsmasq-dns" containerID="cri-o://f3ff0f7bc6eb28f82a8f0abd05ef4e2f0ff91d790ec257e8c85ddd520a423d74" gracePeriod=10 Feb 04 09:00:48 crc kubenswrapper[4644]: E0204 09:00:48.746351 4644 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ee070eb_92be_4710_904a_7dab66158ee2.slice/crio-f3ff0f7bc6eb28f82a8f0abd05ef4e2f0ff91d790ec257e8c85ddd520a423d74.scope\": RecentStats: unable to find data in memory cache]" Feb 04 09:00:48 crc kubenswrapper[4644]: I0204 09:00:48.821450 4644 generic.go:334] "Generic (PLEG): container finished" podID="5ee070eb-92be-4710-904a-7dab66158ee2" containerID="f3ff0f7bc6eb28f82a8f0abd05ef4e2f0ff91d790ec257e8c85ddd520a423d74" exitCode=0 Feb 04 09:00:48 crc kubenswrapper[4644]: I0204 09:00:48.821503 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" event={"ID":"5ee070eb-92be-4710-904a-7dab66158ee2","Type":"ContainerDied","Data":"f3ff0f7bc6eb28f82a8f0abd05ef4e2f0ff91d790ec257e8c85ddd520a423d74"} Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.381008 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 04 09:00:49 crc kubenswrapper[4644]: E0204 09:00:49.413245 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b12ce39-cc90-4762-9d85-3138ee4e0bc0" containerName="init" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.413277 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b12ce39-cc90-4762-9d85-3138ee4e0bc0" containerName="init" Feb 04 09:00:49 crc kubenswrapper[4644]: E0204 09:00:49.413317 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b12ce39-cc90-4762-9d85-3138ee4e0bc0" containerName="dnsmasq-dns" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.413348 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b12ce39-cc90-4762-9d85-3138ee4e0bc0" containerName="dnsmasq-dns" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.413577 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b12ce39-cc90-4762-9d85-3138ee4e0bc0" containerName="dnsmasq-dns" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.414391 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.414542 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.419814 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-6bp82" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.419970 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.420094 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.527416 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d8596785-f659-4038-ac9a-a48c9a4dbd44-openstack-config-secret\") pod \"openstackclient\" (UID: \"d8596785-f659-4038-ac9a-a48c9a4dbd44\") " pod="openstack/openstackclient" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.527531 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d8596785-f659-4038-ac9a-a48c9a4dbd44-openstack-config\") pod \"openstackclient\" (UID: \"d8596785-f659-4038-ac9a-a48c9a4dbd44\") " pod="openstack/openstackclient" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.527570 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvzdk\" (UniqueName: \"kubernetes.io/projected/d8596785-f659-4038-ac9a-a48c9a4dbd44-kube-api-access-qvzdk\") pod \"openstackclient\" (UID: \"d8596785-f659-4038-ac9a-a48c9a4dbd44\") " pod="openstack/openstackclient" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.527595 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8596785-f659-4038-ac9a-a48c9a4dbd44-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d8596785-f659-4038-ac9a-a48c9a4dbd44\") " pod="openstack/openstackclient" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.603267 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.628960 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d8596785-f659-4038-ac9a-a48c9a4dbd44-openstack-config-secret\") pod \"openstackclient\" (UID: \"d8596785-f659-4038-ac9a-a48c9a4dbd44\") " pod="openstack/openstackclient" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.629065 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d8596785-f659-4038-ac9a-a48c9a4dbd44-openstack-config\") pod \"openstackclient\" (UID: \"d8596785-f659-4038-ac9a-a48c9a4dbd44\") " pod="openstack/openstackclient" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.629111 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvzdk\" (UniqueName: \"kubernetes.io/projected/d8596785-f659-4038-ac9a-a48c9a4dbd44-kube-api-access-qvzdk\") pod \"openstackclient\" (UID: \"d8596785-f659-4038-ac9a-a48c9a4dbd44\") " pod="openstack/openstackclient" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.629134 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8596785-f659-4038-ac9a-a48c9a4dbd44-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d8596785-f659-4038-ac9a-a48c9a4dbd44\") " pod="openstack/openstackclient" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.630194 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d8596785-f659-4038-ac9a-a48c9a4dbd44-openstack-config\") pod \"openstackclient\" (UID: \"d8596785-f659-4038-ac9a-a48c9a4dbd44\") " pod="openstack/openstackclient" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.677036 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d8596785-f659-4038-ac9a-a48c9a4dbd44-openstack-config-secret\") pod \"openstackclient\" (UID: \"d8596785-f659-4038-ac9a-a48c9a4dbd44\") " pod="openstack/openstackclient" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.677569 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8596785-f659-4038-ac9a-a48c9a4dbd44-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d8596785-f659-4038-ac9a-a48c9a4dbd44\") " pod="openstack/openstackclient" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.704881 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvzdk\" (UniqueName: \"kubernetes.io/projected/d8596785-f659-4038-ac9a-a48c9a4dbd44-kube-api-access-qvzdk\") pod \"openstackclient\" (UID: \"d8596785-f659-4038-ac9a-a48c9a4dbd44\") " pod="openstack/openstackclient" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.729972 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-dns-svc\") pod \"5ee070eb-92be-4710-904a-7dab66158ee2\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.730034 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-config\") pod \"5ee070eb-92be-4710-904a-7dab66158ee2\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.730082 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-ovsdbserver-nb\") pod \"5ee070eb-92be-4710-904a-7dab66158ee2\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.730156 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-dns-swift-storage-0\") pod \"5ee070eb-92be-4710-904a-7dab66158ee2\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.730243 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvtg9\" (UniqueName: \"kubernetes.io/projected/5ee070eb-92be-4710-904a-7dab66158ee2-kube-api-access-nvtg9\") pod \"5ee070eb-92be-4710-904a-7dab66158ee2\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.730429 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-ovsdbserver-sb\") pod \"5ee070eb-92be-4710-904a-7dab66158ee2\" (UID: \"5ee070eb-92be-4710-904a-7dab66158ee2\") " Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.756571 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee070eb-92be-4710-904a-7dab66158ee2-kube-api-access-nvtg9" (OuterVolumeSpecName: "kube-api-access-nvtg9") pod "5ee070eb-92be-4710-904a-7dab66158ee2" (UID: "5ee070eb-92be-4710-904a-7dab66158ee2"). InnerVolumeSpecName "kube-api-access-nvtg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.772727 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.835636 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvtg9\" (UniqueName: \"kubernetes.io/projected/5ee070eb-92be-4710-904a-7dab66158ee2-kube-api-access-nvtg9\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.845116 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5ee070eb-92be-4710-904a-7dab66158ee2" (UID: "5ee070eb-92be-4710-904a-7dab66158ee2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.914538 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" event={"ID":"5ee070eb-92be-4710-904a-7dab66158ee2","Type":"ContainerDied","Data":"4096f85f0cc4f281b7b51e58d920fff429a9d74e68cf3e0e7bb70097b926c5ef"} Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.914590 4644 scope.go:117] "RemoveContainer" containerID="f3ff0f7bc6eb28f82a8f0abd05ef4e2f0ff91d790ec257e8c85ddd520a423d74" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.914767 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-s25jz" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.925753 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5ee070eb-92be-4710-904a-7dab66158ee2" (UID: "5ee070eb-92be-4710-904a-7dab66158ee2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.926196 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5ee070eb-92be-4710-904a-7dab66158ee2" (UID: "5ee070eb-92be-4710-904a-7dab66158ee2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.939385 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.939420 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.939429 4644 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.955774 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ee070eb-92be-4710-904a-7dab66158ee2" (UID: "5ee070eb-92be-4710-904a-7dab66158ee2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.959829 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-config" (OuterVolumeSpecName: "config") pod "5ee070eb-92be-4710-904a-7dab66158ee2" (UID: "5ee070eb-92be-4710-904a-7dab66158ee2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:00:49 crc kubenswrapper[4644]: I0204 09:00:49.987177 4644 scope.go:117] "RemoveContainer" containerID="f4636e6da88d5536a87b299e5120657b759c1aff4be05efcc9012fe69207f516" Feb 04 09:00:50 crc kubenswrapper[4644]: I0204 09:00:50.040570 4644 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:50 crc kubenswrapper[4644]: I0204 09:00:50.040603 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee070eb-92be-4710-904a-7dab66158ee2-config\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:50 crc kubenswrapper[4644]: I0204 09:00:50.270806 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-s25jz"] Feb 04 09:00:50 crc kubenswrapper[4644]: I0204 09:00:50.296420 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-s25jz"] Feb 04 09:00:50 crc kubenswrapper[4644]: I0204 09:00:50.591307 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 04 09:00:50 crc kubenswrapper[4644]: I0204 09:00:50.690709 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ee070eb-92be-4710-904a-7dab66158ee2" path="/var/lib/kubelet/pods/5ee070eb-92be-4710-904a-7dab66158ee2/volumes" Feb 04 09:00:50 crc kubenswrapper[4644]: I0204 09:00:50.926304 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d8596785-f659-4038-ac9a-a48c9a4dbd44","Type":"ContainerStarted","Data":"89a457dbdced76fef485fedd346ebb827b9ce4713bbeddd150bed3d337c348f1"} Feb 04 09:00:51 crc kubenswrapper[4644]: I0204 09:00:51.873204 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:51 crc kubenswrapper[4644]: I0204 09:00:51.873814 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-558bf4756b-n2g7b" Feb 04 09:00:51 crc kubenswrapper[4644]: I0204 09:00:51.979500 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8697d7f674-lmsql"] Feb 04 09:00:51 crc kubenswrapper[4644]: I0204 09:00:51.980558 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8697d7f674-lmsql" podUID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerName="barbican-api-log" containerID="cri-o://d00befb941562b351226d9d13c03307aded65f10af25bb2456cebe7f77a6e69d" gracePeriod=30 Feb 04 09:00:51 crc kubenswrapper[4644]: I0204 09:00:51.981099 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8697d7f674-lmsql" podUID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerName="barbican-api" containerID="cri-o://fc75c32e5004f2711da02dbf6ac1e60342dc512564cfd2b9f5d19f28447dcf51" gracePeriod=30 Feb 04 09:00:51 crc kubenswrapper[4644]: I0204 09:00:51.990300 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-8697d7f674-lmsql" podUID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF" Feb 04 09:00:51 crc kubenswrapper[4644]: I0204 09:00:51.990500 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8697d7f674-lmsql" 
podUID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF" Feb 04 09:00:51 crc kubenswrapper[4644]: I0204 09:00:51.990729 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8697d7f674-lmsql" podUID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF" Feb 04 09:00:52 crc kubenswrapper[4644]: I0204 09:00:52.947836 4644 generic.go:334] "Generic (PLEG): container finished" podID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerID="d00befb941562b351226d9d13c03307aded65f10af25bb2456cebe7f77a6e69d" exitCode=143 Feb 04 09:00:52 crc kubenswrapper[4644]: I0204 09:00:52.947887 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8697d7f674-lmsql" event={"ID":"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2","Type":"ContainerDied","Data":"d00befb941562b351226d9d13c03307aded65f10af25bb2456cebe7f77a6e69d"} Feb 04 09:00:53 crc kubenswrapper[4644]: I0204 09:00:53.541453 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 04 09:00:53 crc kubenswrapper[4644]: I0204 09:00:53.609591 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 09:00:53 crc kubenswrapper[4644]: I0204 09:00:53.964040 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5030b5b9-3a84-4316-8559-7205ae9b179e" containerName="cinder-scheduler" containerID="cri-o://de01cccc0a1b44c5242db56ad3fb3050e408dd612205632d55fbc97ad3e8c2d6" gracePeriod=30 Feb 04 09:00:53 crc kubenswrapper[4644]: I0204 09:00:53.964070 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5030b5b9-3a84-4316-8559-7205ae9b179e" containerName="probe" containerID="cri-o://d6ea6c32e0b139d6be544dd124878a80e89614e0b4cd72706f57eef38f4d4fc5" gracePeriod=30 Feb 04 09:00:54 crc kubenswrapper[4644]: I0204 09:00:54.010583 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/neutron-5df75db7c8-8lxlc" Feb 04 09:00:54 crc kubenswrapper[4644]: I0204 09:00:54.011548 4644 scope.go:117] "RemoveContainer" containerID="88a0f36a4b4008a2b674eb932a68e1369d7c2e1867c83679c53ae614b95df111" Feb 04 09:00:54 crc kubenswrapper[4644]: E0204 09:00:54.011826 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-5df75db7c8-8lxlc_openstack(2cbe3b5d-7379-447e-acac-6f7306ce230f)\"" pod="openstack/neutron-5df75db7c8-8lxlc" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" Feb 04 09:00:54 crc kubenswrapper[4644]: I0204 09:00:54.018474 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-5df75db7c8-8lxlc" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerName="neutron-api" probeResult="failure" output="Get \"http://10.217.0.156:9696/\": dial tcp 10.217.0.156:9696: connect: connection refused" Feb 04 09:00:54 crc kubenswrapper[4644]: I0204 09:00:54.977799 4644 generic.go:334] "Generic (PLEG): container finished" podID="5030b5b9-3a84-4316-8559-7205ae9b179e" containerID="d6ea6c32e0b139d6be544dd124878a80e89614e0b4cd72706f57eef38f4d4fc5" exitCode=0 Feb 04 09:00:54 crc kubenswrapper[4644]: I0204 09:00:54.977871 4644 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5030b5b9-3a84-4316-8559-7205ae9b179e","Type":"ContainerDied","Data":"d6ea6c32e0b139d6be544dd124878a80e89614e0b4cd72706f57eef38f4d4fc5"} Feb 04 09:00:55 crc kubenswrapper[4644]: I0204 09:00:55.940126 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 04 09:00:55 crc kubenswrapper[4644]: I0204 09:00:55.940673 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=< Feb 04 09:00:55 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:00:55 crc kubenswrapper[4644]: > Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.011874 4644 generic.go:334] "Generic (PLEG): container finished" podID="5030b5b9-3a84-4316-8559-7205ae9b179e" containerID="de01cccc0a1b44c5242db56ad3fb3050e408dd612205632d55fbc97ad3e8c2d6" exitCode=0 Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.011919 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5030b5b9-3a84-4316-8559-7205ae9b179e","Type":"ContainerDied","Data":"de01cccc0a1b44c5242db56ad3fb3050e408dd612205632d55fbc97ad3e8c2d6"} Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.011951 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5030b5b9-3a84-4316-8559-7205ae9b179e","Type":"ContainerDied","Data":"0fad0d5a7ae0f27074e380b8b004cd1d1c4ac1aedfe7db1a6017ccc70c6bee10"} Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.011969 4644 scope.go:117] "RemoveContainer" containerID="d6ea6c32e0b139d6be544dd124878a80e89614e0b4cd72706f57eef38f4d4fc5" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.012133 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.055862 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-combined-ca-bundle\") pod \"5030b5b9-3a84-4316-8559-7205ae9b179e\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.055923 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-config-data\") pod \"5030b5b9-3a84-4316-8559-7205ae9b179e\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.055987 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crgf7\" (UniqueName: \"kubernetes.io/projected/5030b5b9-3a84-4316-8559-7205ae9b179e-kube-api-access-crgf7\") pod \"5030b5b9-3a84-4316-8559-7205ae9b179e\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.056060 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5030b5b9-3a84-4316-8559-7205ae9b179e-etc-machine-id\") pod \"5030b5b9-3a84-4316-8559-7205ae9b179e\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.056087 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-config-data-custom\") pod \"5030b5b9-3a84-4316-8559-7205ae9b179e\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.056107 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-scripts\") pod \"5030b5b9-3a84-4316-8559-7205ae9b179e\" (UID: \"5030b5b9-3a84-4316-8559-7205ae9b179e\") " Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.059024 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5030b5b9-3a84-4316-8559-7205ae9b179e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5030b5b9-3a84-4316-8559-7205ae9b179e" (UID: "5030b5b9-3a84-4316-8559-7205ae9b179e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.064842 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5030b5b9-3a84-4316-8559-7205ae9b179e-kube-api-access-crgf7" (OuterVolumeSpecName: "kube-api-access-crgf7") pod "5030b5b9-3a84-4316-8559-7205ae9b179e" (UID: "5030b5b9-3a84-4316-8559-7205ae9b179e"). InnerVolumeSpecName "kube-api-access-crgf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.066581 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5030b5b9-3a84-4316-8559-7205ae9b179e" (UID: "5030b5b9-3a84-4316-8559-7205ae9b179e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.072247 4644 scope.go:117] "RemoveContainer" containerID="de01cccc0a1b44c5242db56ad3fb3050e408dd612205632d55fbc97ad3e8c2d6" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.073473 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-scripts" (OuterVolumeSpecName: "scripts") pod "5030b5b9-3a84-4316-8559-7205ae9b179e" (UID: "5030b5b9-3a84-4316-8559-7205ae9b179e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.158452 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5030b5b9-3a84-4316-8559-7205ae9b179e" (UID: "5030b5b9-3a84-4316-8559-7205ae9b179e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.160034 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.160063 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crgf7\" (UniqueName: \"kubernetes.io/projected/5030b5b9-3a84-4316-8559-7205ae9b179e-kube-api-access-crgf7\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.160073 4644 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5030b5b9-3a84-4316-8559-7205ae9b179e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.160082 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.160090 4644 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.186957 4644 scope.go:117] "RemoveContainer" containerID="d6ea6c32e0b139d6be544dd124878a80e89614e0b4cd72706f57eef38f4d4fc5" Feb 04 09:00:56 crc kubenswrapper[4644]: E0204 09:00:56.187777 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6ea6c32e0b139d6be544dd124878a80e89614e0b4cd72706f57eef38f4d4fc5\": container with ID starting with d6ea6c32e0b139d6be544dd124878a80e89614e0b4cd72706f57eef38f4d4fc5 not found: ID does not exist" containerID="d6ea6c32e0b139d6be544dd124878a80e89614e0b4cd72706f57eef38f4d4fc5" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.187820 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6ea6c32e0b139d6be544dd124878a80e89614e0b4cd72706f57eef38f4d4fc5"} err="failed to get container status \"d6ea6c32e0b139d6be544dd124878a80e89614e0b4cd72706f57eef38f4d4fc5\": rpc error: code = NotFound desc = could not find container \"d6ea6c32e0b139d6be544dd124878a80e89614e0b4cd72706f57eef38f4d4fc5\": 
container with ID starting with d6ea6c32e0b139d6be544dd124878a80e89614e0b4cd72706f57eef38f4d4fc5 not found: ID does not exist" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.187847 4644 scope.go:117] "RemoveContainer" containerID="de01cccc0a1b44c5242db56ad3fb3050e408dd612205632d55fbc97ad3e8c2d6" Feb 04 09:00:56 crc kubenswrapper[4644]: E0204 09:00:56.188584 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de01cccc0a1b44c5242db56ad3fb3050e408dd612205632d55fbc97ad3e8c2d6\": container with ID starting with de01cccc0a1b44c5242db56ad3fb3050e408dd612205632d55fbc97ad3e8c2d6 not found: ID does not exist" containerID="de01cccc0a1b44c5242db56ad3fb3050e408dd612205632d55fbc97ad3e8c2d6" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.188616 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de01cccc0a1b44c5242db56ad3fb3050e408dd612205632d55fbc97ad3e8c2d6"} err="failed to get container status \"de01cccc0a1b44c5242db56ad3fb3050e408dd612205632d55fbc97ad3e8c2d6\": rpc error: code = NotFound desc = could not find container \"de01cccc0a1b44c5242db56ad3fb3050e408dd612205632d55fbc97ad3e8c2d6\": container with ID starting with de01cccc0a1b44c5242db56ad3fb3050e408dd612205632d55fbc97ad3e8c2d6 not found: ID does not exist" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.209479 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-config-data" (OuterVolumeSpecName: "config-data") pod "5030b5b9-3a84-4316-8559-7205ae9b179e" (UID: "5030b5b9-3a84-4316-8559-7205ae9b179e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.261495 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5030b5b9-3a84-4316-8559-7205ae9b179e-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.351229 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.371150 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.383897 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 09:00:56 crc kubenswrapper[4644]: E0204 09:00:56.384272 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5030b5b9-3a84-4316-8559-7205ae9b179e" containerName="cinder-scheduler" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.384288 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="5030b5b9-3a84-4316-8559-7205ae9b179e" containerName="cinder-scheduler" Feb 04 09:00:56 crc kubenswrapper[4644]: E0204 09:00:56.384295 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee070eb-92be-4710-904a-7dab66158ee2" containerName="init" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.384302 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee070eb-92be-4710-904a-7dab66158ee2" containerName="init" Feb 04 09:00:56 crc kubenswrapper[4644]: E0204 09:00:56.384335 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee070eb-92be-4710-904a-7dab66158ee2" containerName="dnsmasq-dns" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.384341 4644 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee070eb-92be-4710-904a-7dab66158ee2" containerName="dnsmasq-dns" Feb 04 09:00:56 crc kubenswrapper[4644]: E0204 09:00:56.384354 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5030b5b9-3a84-4316-8559-7205ae9b179e" containerName="probe" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.384368 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="5030b5b9-3a84-4316-8559-7205ae9b179e" containerName="probe" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.384524 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="5030b5b9-3a84-4316-8559-7205ae9b179e" containerName="cinder-scheduler" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.384535 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="5030b5b9-3a84-4316-8559-7205ae9b179e" containerName="probe" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.384549 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ee070eb-92be-4710-904a-7dab66158ee2" containerName="dnsmasq-dns" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.385474 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.390863 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.403730 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.464996 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7khnr\" (UniqueName: \"kubernetes.io/projected/e01886c2-fe24-4f65-9ace-d48998f27c65-kube-api-access-7khnr\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.465072 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01886c2-fe24-4f65-9ace-d48998f27c65-scripts\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.465121 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01886c2-fe24-4f65-9ace-d48998f27c65-config-data\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.465157 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01886c2-fe24-4f65-9ace-d48998f27c65-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.465191 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e01886c2-fe24-4f65-9ace-d48998f27c65-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc 
kubenswrapper[4644]: I0204 09:00:56.465215 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e01886c2-fe24-4f65-9ace-d48998f27c65-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.475470 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8697d7f674-lmsql" podUID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:47258->10.217.0.162:9311: read: connection reset by peer" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.475487 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8697d7f674-lmsql" podUID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:47262->10.217.0.162:9311: read: connection reset by peer" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.566813 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7khnr\" (UniqueName: \"kubernetes.io/projected/e01886c2-fe24-4f65-9ace-d48998f27c65-kube-api-access-7khnr\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.566898 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01886c2-fe24-4f65-9ace-d48998f27c65-scripts\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.566920 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01886c2-fe24-4f65-9ace-d48998f27c65-config-data\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.566942 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01886c2-fe24-4f65-9ace-d48998f27c65-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.566968 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e01886c2-fe24-4f65-9ace-d48998f27c65-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.566982 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e01886c2-fe24-4f65-9ace-d48998f27c65-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.567091 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/e01886c2-fe24-4f65-9ace-d48998f27c65-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.571614 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e01886c2-fe24-4f65-9ace-d48998f27c65-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.572895 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01886c2-fe24-4f65-9ace-d48998f27c65-scripts\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.573720 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01886c2-fe24-4f65-9ace-d48998f27c65-config-data\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.582408 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01886c2-fe24-4f65-9ace-d48998f27c65-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.586111 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7khnr\" (UniqueName: \"kubernetes.io/projected/e01886c2-fe24-4f65-9ace-d48998f27c65-kube-api-access-7khnr\") pod \"cinder-scheduler-0\" (UID: \"e01886c2-fe24-4f65-9ace-d48998f27c65\") " pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.689944 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5030b5b9-3a84-4316-8559-7205ae9b179e" path="/var/lib/kubelet/pods/5030b5b9-3a84-4316-8559-7205ae9b179e/volumes" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.727009 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 04 09:00:56 crc kubenswrapper[4644]: I0204 09:00:56.880863 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.025865 4644 generic.go:334] "Generic (PLEG): container finished" podID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerID="fc75c32e5004f2711da02dbf6ac1e60342dc512564cfd2b9f5d19f28447dcf51" exitCode=0 Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.025919 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8697d7f674-lmsql" event={"ID":"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2","Type":"ContainerDied","Data":"fc75c32e5004f2711da02dbf6ac1e60342dc512564cfd2b9f5d19f28447dcf51"} Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.025944 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8697d7f674-lmsql" event={"ID":"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2","Type":"ContainerDied","Data":"57d4ed60433bebd8b953c29911c25fcb61444b2cc94c165019983f42457bb41e"} Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.025960 4644 scope.go:117] "RemoveContainer" containerID="fc75c32e5004f2711da02dbf6ac1e60342dc512564cfd2b9f5d19f28447dcf51" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.026048 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8697d7f674-lmsql" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.078543 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-config-data-custom\") pod \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.078599 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-combined-ca-bundle\") pod \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.078649 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-config-data\") pod \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.078731 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b26ns\" (UniqueName: \"kubernetes.io/projected/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-kube-api-access-b26ns\") pod \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.078801 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-logs\") pod \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\" (UID: \"8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2\") " Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.082211 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-logs" (OuterVolumeSpecName: "logs") pod "8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" (UID: "8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.086451 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-kube-api-access-b26ns" (OuterVolumeSpecName: "kube-api-access-b26ns") pod "8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" (UID: "8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2"). InnerVolumeSpecName "kube-api-access-b26ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.089931 4644 scope.go:117] "RemoveContainer" containerID="d00befb941562b351226d9d13c03307aded65f10af25bb2456cebe7f77a6e69d" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.117187 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" (UID: "8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.153641 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" (UID: "8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.180279 4644 scope.go:117] "RemoveContainer" containerID="fc75c32e5004f2711da02dbf6ac1e60342dc512564cfd2b9f5d19f28447dcf51" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.180956 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.180976 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b26ns\" (UniqueName: \"kubernetes.io/projected/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-kube-api-access-b26ns\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.180988 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-logs\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.180997 4644 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:57 crc kubenswrapper[4644]: E0204 09:00:57.181201 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc75c32e5004f2711da02dbf6ac1e60342dc512564cfd2b9f5d19f28447dcf51\": container with ID starting with fc75c32e5004f2711da02dbf6ac1e60342dc512564cfd2b9f5d19f28447dcf51 not found: ID does not exist" containerID="fc75c32e5004f2711da02dbf6ac1e60342dc512564cfd2b9f5d19f28447dcf51" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.181275 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc75c32e5004f2711da02dbf6ac1e60342dc512564cfd2b9f5d19f28447dcf51"} 
err="failed to get container status \"fc75c32e5004f2711da02dbf6ac1e60342dc512564cfd2b9f5d19f28447dcf51\": rpc error: code = NotFound desc = could not find container \"fc75c32e5004f2711da02dbf6ac1e60342dc512564cfd2b9f5d19f28447dcf51\": container with ID starting with fc75c32e5004f2711da02dbf6ac1e60342dc512564cfd2b9f5d19f28447dcf51 not found: ID does not exist" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.181371 4644 scope.go:117] "RemoveContainer" containerID="d00befb941562b351226d9d13c03307aded65f10af25bb2456cebe7f77a6e69d" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.184508 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-config-data" (OuterVolumeSpecName: "config-data") pod "8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" (UID: "8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:00:57 crc kubenswrapper[4644]: E0204 09:00:57.184535 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d00befb941562b351226d9d13c03307aded65f10af25bb2456cebe7f77a6e69d\": container with ID starting with d00befb941562b351226d9d13c03307aded65f10af25bb2456cebe7f77a6e69d not found: ID does not exist" containerID="d00befb941562b351226d9d13c03307aded65f10af25bb2456cebe7f77a6e69d" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.184569 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d00befb941562b351226d9d13c03307aded65f10af25bb2456cebe7f77a6e69d"} err="failed to get container status \"d00befb941562b351226d9d13c03307aded65f10af25bb2456cebe7f77a6e69d\": rpc error: code = NotFound desc = could not find container \"d00befb941562b351226d9d13c03307aded65f10af25bb2456cebe7f77a6e69d\": container with ID starting with d00befb941562b351226d9d13c03307aded65f10af25bb2456cebe7f77a6e69d not found: ID does not exist" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.282167 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.308144 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 09:00:57 crc kubenswrapper[4644]: W0204 09:00:57.318765 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode01886c2_fe24_4f65_9ace_d48998f27c65.slice/crio-5183f5fede34c5d064399155537ab1dd60c68195da3fd91bc4cdb2200a78571a WatchSource:0}: Error finding container 5183f5fede34c5d064399155537ab1dd60c68195da3fd91bc4cdb2200a78571a: Status 404 returned error can't find the container with id 5183f5fede34c5d064399155537ab1dd60c68195da3fd91bc4cdb2200a78571a Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.361056 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8697d7f674-lmsql"] Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.367722 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-8697d7f674-lmsql"] Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.501470 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5cdfd666b9-jkzcm" Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.594009 
4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5df75db7c8-8lxlc"] Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.594298 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5df75db7c8-8lxlc" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerName="neutron-api" containerID="cri-o://8f400303386c89b5f4915ede18ee680310e248ff45456398b81f1abc25bf895d" gracePeriod=30 Feb 04 09:00:57 crc kubenswrapper[4644]: I0204 09:00:57.837440 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 04 09:00:58 crc kubenswrapper[4644]: I0204 09:00:58.098576 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e01886c2-fe24-4f65-9ace-d48998f27c65","Type":"ContainerStarted","Data":"5183f5fede34c5d064399155537ab1dd60c68195da3fd91bc4cdb2200a78571a"} Feb 04 09:00:58 crc kubenswrapper[4644]: I0204 09:00:58.680181 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" path="/var/lib/kubelet/pods/8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2/volumes" Feb 04 09:00:59 crc kubenswrapper[4644]: I0204 09:00:59.115000 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e01886c2-fe24-4f65-9ace-d48998f27c65","Type":"ContainerStarted","Data":"dd41ef4f26e3a16bebcea55f57ec3208bb3a1f30361726d2cbd9105cb7cc03ea"} Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.140232 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e01886c2-fe24-4f65-9ace-d48998f27c65","Type":"ContainerStarted","Data":"4acaa5e903cecd105d2fc88041329e883cc66c1d795118ae5240887f9ff78e3c"} Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.153355 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29503261-bsnnv"] Feb 04 09:01:00 crc kubenswrapper[4644]: E0204 09:01:00.153717 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerName="barbican-api" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.153741 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerName="barbican-api" Feb 04 09:01:00 crc kubenswrapper[4644]: E0204 09:01:00.153770 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerName="barbican-api-log" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.153776 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerName="barbican-api-log" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.153940 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerName="barbican-api-log" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.153962 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a13c9a4-cb5d-415c-8515-ab0fac5f0aa2" containerName="barbican-api" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.157081 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.163828 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-config-data\") pod \"keystone-cron-29503261-bsnnv\" (UID: \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.163926 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqtck\" (UniqueName: \"kubernetes.io/projected/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-kube-api-access-jqtck\") pod \"keystone-cron-29503261-bsnnv\" (UID: \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.163959 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-fernet-keys\") pod \"keystone-cron-29503261-bsnnv\" (UID: \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.163991 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-combined-ca-bundle\") pod \"keystone-cron-29503261-bsnnv\" (UID: \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.195407 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29503261-bsnnv"] Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.200951 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.200933394 podStartE2EDuration="4.200933394s" podCreationTimestamp="2026-02-04 09:00:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:01:00.155828983 +0000 UTC m=+1170.195886748" watchObservedRunningTime="2026-02-04 09:01:00.200933394 +0000 UTC m=+1170.240991149" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.266021 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqtck\" (UniqueName: \"kubernetes.io/projected/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-kube-api-access-jqtck\") pod \"keystone-cron-29503261-bsnnv\" (UID: \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.266603 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-fernet-keys\") pod \"keystone-cron-29503261-bsnnv\" (UID: \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.266687 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-combined-ca-bundle\") pod \"keystone-cron-29503261-bsnnv\" (UID: 
\"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.266818 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-config-data\") pod \"keystone-cron-29503261-bsnnv\" (UID: \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.275671 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-fernet-keys\") pod \"keystone-cron-29503261-bsnnv\" (UID: \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.280575 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-config-data\") pod \"keystone-cron-29503261-bsnnv\" (UID: \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.285528 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqtck\" (UniqueName: \"kubernetes.io/projected/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-kube-api-access-jqtck\") pod \"keystone-cron-29503261-bsnnv\" (UID: \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.307444 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-combined-ca-bundle\") pod \"keystone-cron-29503261-bsnnv\" (UID: \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.492630 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.610951 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-69d495f767-hzkrb"] Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.613010 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.618656 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.618872 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.623530 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.634603 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-69d495f767-hzkrb"] Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.683312 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p956g\" (UniqueName: \"kubernetes.io/projected/6a04a95b-5411-483c-a0de-408fa44500e0-kube-api-access-p956g\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.683426 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a04a95b-5411-483c-a0de-408fa44500e0-run-httpd\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.683469 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6a04a95b-5411-483c-a0de-408fa44500e0-etc-swift\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.683511 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a04a95b-5411-483c-a0de-408fa44500e0-log-httpd\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.683557 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a04a95b-5411-483c-a0de-408fa44500e0-config-data\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.683622 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a04a95b-5411-483c-a0de-408fa44500e0-public-tls-certs\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.683670 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a04a95b-5411-483c-a0de-408fa44500e0-combined-ca-bundle\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " 
pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.683698 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a04a95b-5411-483c-a0de-408fa44500e0-internal-tls-certs\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.785501 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p956g\" (UniqueName: \"kubernetes.io/projected/6a04a95b-5411-483c-a0de-408fa44500e0-kube-api-access-p956g\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.785969 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a04a95b-5411-483c-a0de-408fa44500e0-run-httpd\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.786029 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6a04a95b-5411-483c-a0de-408fa44500e0-etc-swift\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.786130 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a04a95b-5411-483c-a0de-408fa44500e0-log-httpd\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.786185 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a04a95b-5411-483c-a0de-408fa44500e0-config-data\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.786296 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a04a95b-5411-483c-a0de-408fa44500e0-public-tls-certs\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.786432 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a04a95b-5411-483c-a0de-408fa44500e0-combined-ca-bundle\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.786476 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a04a95b-5411-483c-a0de-408fa44500e0-internal-tls-certs\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " 
pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.786957 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a04a95b-5411-483c-a0de-408fa44500e0-log-httpd\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.787031 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a04a95b-5411-483c-a0de-408fa44500e0-run-httpd\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.803030 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6a04a95b-5411-483c-a0de-408fa44500e0-etc-swift\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.807778 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a04a95b-5411-483c-a0de-408fa44500e0-internal-tls-certs\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.807977 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a04a95b-5411-483c-a0de-408fa44500e0-public-tls-certs\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.808680 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p956g\" (UniqueName: \"kubernetes.io/projected/6a04a95b-5411-483c-a0de-408fa44500e0-kube-api-access-p956g\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.816604 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a04a95b-5411-483c-a0de-408fa44500e0-config-data\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.842676 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a04a95b-5411-483c-a0de-408fa44500e0-combined-ca-bundle\") pod \"swift-proxy-69d495f767-hzkrb\" (UID: \"6a04a95b-5411-483c-a0de-408fa44500e0\") " pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:00 crc kubenswrapper[4644]: I0204 09:01:00.942865 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:01 crc kubenswrapper[4644]: I0204 09:01:01.507893 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 04 09:01:01 crc kubenswrapper[4644]: I0204 09:01:01.727524 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 04 09:01:02 crc kubenswrapper[4644]: I0204 09:01:02.163564 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5df75db7c8-8lxlc_2cbe3b5d-7379-447e-acac-6f7306ce230f/neutron-httpd/2.log" Feb 04 09:01:02 crc kubenswrapper[4644]: I0204 09:01:02.164501 4644 generic.go:334] "Generic (PLEG): container finished" podID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerID="8f400303386c89b5f4915ede18ee680310e248ff45456398b81f1abc25bf895d" exitCode=0 Feb 04 09:01:02 crc kubenswrapper[4644]: I0204 09:01:02.165359 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5df75db7c8-8lxlc" event={"ID":"2cbe3b5d-7379-447e-acac-6f7306ce230f","Type":"ContainerDied","Data":"8f400303386c89b5f4915ede18ee680310e248ff45456398b81f1abc25bf895d"} Feb 04 09:01:03 crc kubenswrapper[4644]: I0204 09:01:03.655183 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:01:03 crc kubenswrapper[4644]: I0204 09:01:03.655562 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="sg-core" containerID="cri-o://97630ef708deaa5a2cd22a7967df419a0ba4b059ee7f9db1a41d9703d870ca0a" gracePeriod=30 Feb 04 09:01:03 crc kubenswrapper[4644]: I0204 09:01:03.655617 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="ceilometer-notification-agent" containerID="cri-o://db929804af7d7d1f8ca9dbd96ece1821b498c830df60c15cf136a78965e102a5" gracePeriod=30 Feb 04 09:01:03 crc kubenswrapper[4644]: I0204 09:01:03.655690 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="proxy-httpd" containerID="cri-o://bd06bbde6b0ee3fa4daf18441153d9637862bf4c638420d2f86ff11913f8fd1f" gracePeriod=30 Feb 04 09:01:03 crc kubenswrapper[4644]: I0204 09:01:03.655767 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="ceilometer-central-agent" containerID="cri-o://624434945be32abda71164d79ad698fbaa08ec31dc56b12297b43697c9870b12" gracePeriod=30 Feb 04 09:01:03 crc kubenswrapper[4644]: I0204 09:01:03.861127 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 09:01:03 crc kubenswrapper[4644]: I0204 09:01:03.861567 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" containerName="glance-log" containerID="cri-o://b0a601c8952550240f5fe1c6167df25d68c6543a1cbc4347b12d4db6219af241" gracePeriod=30 Feb 04 09:01:03 crc kubenswrapper[4644]: I0204 09:01:03.861989 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" containerName="glance-httpd" 
containerID="cri-o://c0076b247a60824499e73ef429b4bc6df4a0f9f582e714949ebeefbab1dd57db" gracePeriod=30 Feb 04 09:01:04 crc kubenswrapper[4644]: I0204 09:01:04.230666 4644 generic.go:334] "Generic (PLEG): container finished" podID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerID="bd06bbde6b0ee3fa4daf18441153d9637862bf4c638420d2f86ff11913f8fd1f" exitCode=0 Feb 04 09:01:04 crc kubenswrapper[4644]: I0204 09:01:04.230704 4644 generic.go:334] "Generic (PLEG): container finished" podID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerID="97630ef708deaa5a2cd22a7967df419a0ba4b059ee7f9db1a41d9703d870ca0a" exitCode=2 Feb 04 09:01:04 crc kubenswrapper[4644]: I0204 09:01:04.230751 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad50edf-1565-43a4-b0c6-7aef5bc98722","Type":"ContainerDied","Data":"bd06bbde6b0ee3fa4daf18441153d9637862bf4c638420d2f86ff11913f8fd1f"} Feb 04 09:01:04 crc kubenswrapper[4644]: I0204 09:01:04.230782 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad50edf-1565-43a4-b0c6-7aef5bc98722","Type":"ContainerDied","Data":"97630ef708deaa5a2cd22a7967df419a0ba4b059ee7f9db1a41d9703d870ca0a"} Feb 04 09:01:04 crc kubenswrapper[4644]: I0204 09:01:04.234157 4644 generic.go:334] "Generic (PLEG): container finished" podID="fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" containerID="b0a601c8952550240f5fe1c6167df25d68c6543a1cbc4347b12d4db6219af241" exitCode=143 Feb 04 09:01:04 crc kubenswrapper[4644]: I0204 09:01:04.234250 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f","Type":"ContainerDied","Data":"b0a601c8952550240f5fe1c6167df25d68c6543a1cbc4347b12d4db6219af241"} Feb 04 09:01:05 crc kubenswrapper[4644]: I0204 09:01:05.246507 4644 generic.go:334] "Generic (PLEG): container finished" podID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerID="624434945be32abda71164d79ad698fbaa08ec31dc56b12297b43697c9870b12" exitCode=0 Feb 04 09:01:05 crc kubenswrapper[4644]: I0204 09:01:05.246676 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad50edf-1565-43a4-b0c6-7aef5bc98722","Type":"ContainerDied","Data":"624434945be32abda71164d79ad698fbaa08ec31dc56b12297b43697c9870b12"} Feb 04 09:01:05 crc kubenswrapper[4644]: I0204 09:01:05.555263 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:01:05 crc kubenswrapper[4644]: I0204 09:01:05.555680 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:01:05 crc kubenswrapper[4644]: I0204 09:01:05.883786 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=< Feb 04 09:01:05 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:01:05 crc kubenswrapper[4644]: > Feb 04 09:01:06 crc kubenswrapper[4644]: I0204 09:01:06.957556 
4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 04 09:01:07 crc kubenswrapper[4644]: I0204 09:01:07.285006 4644 generic.go:334] "Generic (PLEG): container finished" podID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerID="db929804af7d7d1f8ca9dbd96ece1821b498c830df60c15cf136a78965e102a5" exitCode=0 Feb 04 09:01:07 crc kubenswrapper[4644]: I0204 09:01:07.285104 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad50edf-1565-43a4-b0c6-7aef5bc98722","Type":"ContainerDied","Data":"db929804af7d7d1f8ca9dbd96ece1821b498c830df60c15cf136a78965e102a5"} Feb 04 09:01:07 crc kubenswrapper[4644]: I0204 09:01:07.302309 4644 generic.go:334] "Generic (PLEG): container finished" podID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerID="8902692ba02e033725e2a06b7322db9e6b6ebfef6c1d54196d590bb4f96705ea" exitCode=137 Feb 04 09:01:07 crc kubenswrapper[4644]: I0204 09:01:07.302423 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb9db66f6-v84nx" event={"ID":"46ca97c0-c6d7-4547-bb97-1d8b032c6297","Type":"ContainerDied","Data":"8902692ba02e033725e2a06b7322db9e6b6ebfef6c1d54196d590bb4f96705ea"} Feb 04 09:01:07 crc kubenswrapper[4644]: I0204 09:01:07.308666 4644 generic.go:334] "Generic (PLEG): container finished" podID="676db25f-e0ad-48cc-af2c-88029d6eb80d" containerID="95510a00131773c32fd94f8ae9454628b093e68aa915de83d3362250d0551602" exitCode=137 Feb 04 09:01:07 crc kubenswrapper[4644]: I0204 09:01:07.308720 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-658bfcb544-88gj4" event={"ID":"676db25f-e0ad-48cc-af2c-88029d6eb80d","Type":"ContainerDied","Data":"95510a00131773c32fd94f8ae9454628b093e68aa915de83d3362250d0551602"} Feb 04 09:01:07 crc kubenswrapper[4644]: I0204 09:01:07.384107 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:51308->10.217.0.152:9292: read: connection reset by peer" Feb 04 09:01:07 crc kubenswrapper[4644]: I0204 09:01:07.384197 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:51292->10.217.0.152:9292: read: connection reset by peer" Feb 04 09:01:08 crc kubenswrapper[4644]: I0204 09:01:08.048061 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:01:08 crc kubenswrapper[4644]: I0204 09:01:08.054421 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6fc7776988-rx9dz" Feb 04 09:01:08 crc kubenswrapper[4644]: I0204 09:01:08.337665 4644 generic.go:334] "Generic (PLEG): container finished" podID="fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" containerID="c0076b247a60824499e73ef429b4bc6df4a0f9f582e714949ebeefbab1dd57db" exitCode=0 Feb 04 09:01:08 crc kubenswrapper[4644]: I0204 09:01:08.338430 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f","Type":"ContainerDied","Data":"c0076b247a60824499e73ef429b4bc6df4a0f9f582e714949ebeefbab1dd57db"} Feb 04 09:01:09 crc kubenswrapper[4644]: 
I0204 09:01:09.070189 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 09:01:09 crc kubenswrapper[4644]: I0204 09:01:09.071236 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1590cd61-9cd4-479e-9ba8-d323890eecc0" containerName="glance-log" containerID="cri-o://f9608a5e50f74d623dd19b8d3c553a47020128c3f555e1ab6c2d0bf05a114380" gracePeriod=30 Feb 04 09:01:09 crc kubenswrapper[4644]: I0204 09:01:09.071716 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1590cd61-9cd4-479e-9ba8-d323890eecc0" containerName="glance-httpd" containerID="cri-o://c88267f2681a46d2aa4025afe5a03dafedb80238e4041bd520cda47a5fd6a8ba" gracePeriod=30 Feb 04 09:01:09 crc kubenswrapper[4644]: I0204 09:01:09.367720 4644 generic.go:334] "Generic (PLEG): container finished" podID="1590cd61-9cd4-479e-9ba8-d323890eecc0" containerID="f9608a5e50f74d623dd19b8d3c553a47020128c3f555e1ab6c2d0bf05a114380" exitCode=143 Feb 04 09:01:09 crc kubenswrapper[4644]: I0204 09:01:09.367763 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1590cd61-9cd4-479e-9ba8-d323890eecc0","Type":"ContainerDied","Data":"f9608a5e50f74d623dd19b8d3c553a47020128c3f555e1ab6c2d0bf05a114380"} Feb 04 09:01:10 crc kubenswrapper[4644]: E0204 09:01:10.081590 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Feb 04 09:01:10 crc kubenswrapper[4644]: E0204 09:01:10.081962 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndch6bh569h69h644h66bhb8h5fch5bh75h576h696h75h5c4h595hffhc9h654h566h64fh64fh67dh5f5hd9h654h5fh64ch5d4hbh56chc4hf9q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvzdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(d8596785-f659-4038-ac9a-a48c9a4dbd44): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 04 09:01:10 crc kubenswrapper[4644]: E0204 09:01:10.083308 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="d8596785-f659-4038-ac9a-a48c9a4dbd44"
Feb 04 09:01:10 crc kubenswrapper[4644]: E0204 09:01:10.406696 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="d8596785-f659-4038-ac9a-a48c9a4dbd44"
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.624638 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.678639 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-sg-core-conf-yaml\") pod \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.678938 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-combined-ca-bundle\") pod \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.678994 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsh92\" (UniqueName: \"kubernetes.io/projected/3ad50edf-1565-43a4-b0c6-7aef5bc98722-kube-api-access-jsh92\") pod \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.679032 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad50edf-1565-43a4-b0c6-7aef5bc98722-run-httpd\") pod \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.679123 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad50edf-1565-43a4-b0c6-7aef5bc98722-log-httpd\") pod \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.679154 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-config-data\") pod \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.679203 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-scripts\") pod \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\" (UID: \"3ad50edf-1565-43a4-b0c6-7aef5bc98722\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.684208 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ad50edf-1565-43a4-b0c6-7aef5bc98722-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3ad50edf-1565-43a4-b0c6-7aef5bc98722" (UID: "3ad50edf-1565-43a4-b0c6-7aef5bc98722"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.684497 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ad50edf-1565-43a4-b0c6-7aef5bc98722-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3ad50edf-1565-43a4-b0c6-7aef5bc98722" (UID: "3ad50edf-1565-43a4-b0c6-7aef5bc98722"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.698825 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad50edf-1565-43a4-b0c6-7aef5bc98722-kube-api-access-jsh92" (OuterVolumeSpecName: "kube-api-access-jsh92") pod "3ad50edf-1565-43a4-b0c6-7aef5bc98722" (UID: "3ad50edf-1565-43a4-b0c6-7aef5bc98722"). InnerVolumeSpecName "kube-api-access-jsh92". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.707181 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-scripts" (OuterVolumeSpecName: "scripts") pod "3ad50edf-1565-43a4-b0c6-7aef5bc98722" (UID: "3ad50edf-1565-43a4-b0c6-7aef5bc98722"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.772504 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3ad50edf-1565-43a4-b0c6-7aef5bc98722" (UID: "3ad50edf-1565-43a4-b0c6-7aef5bc98722"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.786421 4644 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad50edf-1565-43a4-b0c6-7aef5bc98722-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.786447 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.786455 4644 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.786465 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsh92\" (UniqueName: \"kubernetes.io/projected/3ad50edf-1565-43a4-b0c6-7aef5bc98722-kube-api-access-jsh92\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.786474 4644 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad50edf-1565-43a4-b0c6-7aef5bc98722-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.849601 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-config-data" (OuterVolumeSpecName: "config-data") pod "3ad50edf-1565-43a4-b0c6-7aef5bc98722" (UID: "3ad50edf-1565-43a4-b0c6-7aef5bc98722"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.858088 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.887844 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-internal-tls-certs\") pod \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.887948 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-combined-ca-bundle\") pod \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.887987 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.888059 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kfm9\" (UniqueName: \"kubernetes.io/projected/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-kube-api-access-5kfm9\") pod \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.888124 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-logs\") pod \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.888218 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-config-data\") pod \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.888236 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-scripts\") pod \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.888262 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-httpd-run\") pod \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.888668 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-config-data\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.889077 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" (UID: "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.889098 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-logs" (OuterVolumeSpecName: "logs") pod "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" (UID: "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.921058 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-kube-api-access-5kfm9" (OuterVolumeSpecName: "kube-api-access-5kfm9") pod "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" (UID: "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f"). InnerVolumeSpecName "kube-api-access-5kfm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.925607 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" (UID: "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.944268 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-scripts" (OuterVolumeSpecName: "scripts") pod "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" (UID: "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.954482 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ad50edf-1565-43a4-b0c6-7aef5bc98722" (UID: "3ad50edf-1565-43a4-b0c6-7aef5bc98722"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.983599 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" (UID: "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.990973 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-config-data" (OuterVolumeSpecName: "config-data") pod "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" (UID: "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.991123 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-config-data\") pod \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\" (UID: \"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f\") "
Feb 04 09:01:10 crc kubenswrapper[4644]: W0204 09:01:10.991610 4644 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f/volumes/kubernetes.io~secret/config-data
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.991626 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-config-data" (OuterVolumeSpecName: "config-data") pod "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" (UID: "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.991844 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-logs\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.991863 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad50edf-1565-43a4-b0c6-7aef5bc98722-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.991871 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-config-data\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.991879 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.991887 4644 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.991896 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.991917 4644 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Feb 04 09:01:10 crc kubenswrapper[4644]: I0204 09:01:10.991926 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kfm9\" (UniqueName: \"kubernetes.io/projected/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-kube-api-access-5kfm9\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.023144 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" (UID: "fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.023271 4644 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.093716 4644 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.093750 4644 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.200259 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29503261-bsnnv"]
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.200489 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5df75db7c8-8lxlc_2cbe3b5d-7379-447e-acac-6f7306ce230f/neutron-httpd/2.log"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.200871 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5df75db7c8-8lxlc"
Feb 04 09:01:11 crc kubenswrapper[4644]: W0204 09:01:11.204148 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc72f9f7_839f_402b_9576_e9daf7ed4d5b.slice/crio-0e0f8c900f976371251119a4a2a3242684b13554492ac4b31f497ea39a49205d WatchSource:0}: Error finding container 0e0f8c900f976371251119a4a2a3242684b13554492ac4b31f497ea39a49205d: Status 404 returned error can't find the container with id 0e0f8c900f976371251119a4a2a3242684b13554492ac4b31f497ea39a49205d
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.296977 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-ovndb-tls-certs\") pod \"2cbe3b5d-7379-447e-acac-6f7306ce230f\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") "
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.297182 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-combined-ca-bundle\") pod \"2cbe3b5d-7379-447e-acac-6f7306ce230f\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") "
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.297204 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-config\") pod \"2cbe3b5d-7379-447e-acac-6f7306ce230f\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") "
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.297231 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kflsz\" (UniqueName: \"kubernetes.io/projected/2cbe3b5d-7379-447e-acac-6f7306ce230f-kube-api-access-kflsz\") pod \"2cbe3b5d-7379-447e-acac-6f7306ce230f\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") "
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.297272 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-httpd-config\") pod \"2cbe3b5d-7379-447e-acac-6f7306ce230f\" (UID: \"2cbe3b5d-7379-447e-acac-6f7306ce230f\") "
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.301613 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2cbe3b5d-7379-447e-acac-6f7306ce230f" (UID: "2cbe3b5d-7379-447e-acac-6f7306ce230f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.308062 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cbe3b5d-7379-447e-acac-6f7306ce230f-kube-api-access-kflsz" (OuterVolumeSpecName: "kube-api-access-kflsz") pod "2cbe3b5d-7379-447e-acac-6f7306ce230f" (UID: "2cbe3b5d-7379-447e-acac-6f7306ce230f"). InnerVolumeSpecName "kube-api-access-kflsz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.393443 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-69d495f767-hzkrb"]
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.402535 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kflsz\" (UniqueName: \"kubernetes.io/projected/2cbe3b5d-7379-447e-acac-6f7306ce230f-kube-api-access-kflsz\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.402571 4644 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.454769 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad50edf-1565-43a4-b0c6-7aef5bc98722","Type":"ContainerDied","Data":"0f0d7ea32a29835e75b900c128986ccd36e15ca782ee718a52f1bb43f3a0dcd2"}
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.454990 4644 scope.go:117] "RemoveContainer" containerID="bd06bbde6b0ee3fa4daf18441153d9637862bf4c638420d2f86ff11913f8fd1f"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.455168 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.465551 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cbe3b5d-7379-447e-acac-6f7306ce230f" (UID: "2cbe3b5d-7379-447e-acac-6f7306ce230f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.475476 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2cbe3b5d-7379-447e-acac-6f7306ce230f" (UID: "2cbe3b5d-7379-447e-acac-6f7306ce230f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.477262 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb9db66f6-v84nx" event={"ID":"46ca97c0-c6d7-4547-bb97-1d8b032c6297","Type":"ContainerStarted","Data":"81f16fdc203970319ecb23e7416371815a2c6e54217d6c427b1e56240753ad09"}
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.505658 4644 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.505928 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.507391 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f","Type":"ContainerDied","Data":"6d619b6dae6b0cdb6da7ec38cee601e7e206204b53204de9e29827c464e8e341"}
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.507607 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.515603 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29503261-bsnnv" event={"ID":"cc72f9f7-839f-402b-9576-e9daf7ed4d5b","Type":"ContainerStarted","Data":"0e0f8c900f976371251119a4a2a3242684b13554492ac4b31f497ea39a49205d"}
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.519207 4644 scope.go:117] "RemoveContainer" containerID="97630ef708deaa5a2cd22a7967df419a0ba4b059ee7f9db1a41d9703d870ca0a"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.520878 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5df75db7c8-8lxlc_2cbe3b5d-7379-447e-acac-6f7306ce230f/neutron-httpd/2.log"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.522110 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5df75db7c8-8lxlc" event={"ID":"2cbe3b5d-7379-447e-acac-6f7306ce230f","Type":"ContainerDied","Data":"eb53fc54c789f26045569ab706f7c08d2a759b5c392eea86b7f5e34d46cf936a"}
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.522279 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5df75db7c8-8lxlc"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.540852 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-658bfcb544-88gj4" event={"ID":"676db25f-e0ad-48cc-af2c-88029d6eb80d","Type":"ContainerStarted","Data":"f01a8b8dbe870844e14ee06a8b4dba193aaaac4fb5eda1b69cc4f024e76843d5"}
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.557736 4644 scope.go:117] "RemoveContainer" containerID="db929804af7d7d1f8ca9dbd96ece1821b498c830df60c15cf136a78965e102a5"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.561278 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-69d495f767-hzkrb" event={"ID":"6a04a95b-5411-483c-a0de-408fa44500e0","Type":"ContainerStarted","Data":"426d914984c283b9d88aa8e07ac8682c3195d1825fb3c81e8d8ae09ad70535d8"}
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.562642 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.590442 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.612887 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 04 09:01:11 crc kubenswrapper[4644]: E0204 09:01:11.613478 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="sg-core"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.613617 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="sg-core"
Feb 04 09:01:11 crc kubenswrapper[4644]: E0204 09:01:11.613692 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerName="neutron-httpd"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.613883 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerName="neutron-httpd"
Feb 04 09:01:11 crc kubenswrapper[4644]: E0204 09:01:11.613961 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="proxy-httpd"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.614030 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="proxy-httpd"
Feb 04 09:01:11 crc kubenswrapper[4644]: E0204 09:01:11.614111 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerName="neutron-httpd"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.614187 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerName="neutron-httpd"
Feb 04 09:01:11 crc kubenswrapper[4644]: E0204 09:01:11.614242 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="ceilometer-notification-agent"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.614311 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="ceilometer-notification-agent"
Feb 04 09:01:11 crc kubenswrapper[4644]: E0204 09:01:11.614421 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="ceilometer-central-agent"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.614491 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="ceilometer-central-agent"
Feb 04 09:01:11 crc kubenswrapper[4644]: E0204 09:01:11.614546 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" containerName="glance-httpd"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.614623 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" containerName="glance-httpd"
Feb 04 09:01:11 crc kubenswrapper[4644]: E0204 09:01:11.614701 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerName="neutron-api"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.614757 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerName="neutron-api"
Feb 04 09:01:11 crc kubenswrapper[4644]: E0204 09:01:11.614969 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" containerName="glance-log"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.615035 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" containerName="glance-log"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.615347 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="sg-core"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.615416 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="ceilometer-notification-agent"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.615479 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerName="neutron-httpd"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.615565 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" containerName="glance-httpd"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.615622 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="ceilometer-central-agent"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.615673 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerName="neutron-httpd"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.615731 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerName="neutron-api"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.615783 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" containerName="proxy-httpd"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.615842 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" containerName="glance-log"
Feb 04 09:01:11 crc kubenswrapper[4644]: E0204 09:01:11.616048 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerName="neutron-httpd"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.616107 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerName="neutron-httpd"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.616340 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" containerName="neutron-httpd"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.618202 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.621499 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.621625 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.628736 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.676695 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.678226 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-config" (OuterVolumeSpecName: "config") pod "2cbe3b5d-7379-447e-acac-6f7306ce230f" (UID: "2cbe3b5d-7379-447e-acac-6f7306ce230f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.679125 4644 scope.go:117] "RemoveContainer" containerID="624434945be32abda71164d79ad698fbaa08ec31dc56b12297b43697c9870b12"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.688436 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.716091 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-scripts\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.716193 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2cbe3b5d-7379-447e-acac-6f7306ce230f-config\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.743306 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.748895 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.752954 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.754705 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.768599 4644 scope.go:117] "RemoveContainer" containerID="c0076b247a60824499e73ef429b4bc6df4a0f9f582e714949ebeefbab1dd57db"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.781374 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.821521 4644 scope.go:117] "RemoveContainer" containerID="b0a601c8952550240f5fe1c6167df25d68c6543a1cbc4347b12d4db6219af241"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.822566 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51af5185-040f-471e-8edf-3b7d27c84a5c-run-httpd\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.841091 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc746\" (UniqueName: \"kubernetes.io/projected/51af5185-040f-471e-8edf-3b7d27c84a5c-kube-api-access-wc746\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.843382 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-config-data\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.843775 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.844623 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51af5185-040f-471e-8edf-3b7d27c84a5c-log-httpd\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.844757 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.844961 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-scripts\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.863916 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-scripts\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.886377 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5df75db7c8-8lxlc"]
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.891348 4644 scope.go:117] "RemoveContainer" containerID="88a0f36a4b4008a2b674eb932a68e1369d7c2e1867c83679c53ae614b95df111"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.900142 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5df75db7c8-8lxlc"]
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.946971 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fda6114-8d44-49ba-b30e-8ce9233f4b33-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.947257 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fda6114-8d44-49ba-b30e-8ce9233f4b33-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.947399 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fda6114-8d44-49ba-b30e-8ce9233f4b33-logs\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.947534 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.947674 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51af5185-040f-471e-8edf-3b7d27c84a5c-run-httpd\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.947818 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fda6114-8d44-49ba-b30e-8ce9233f4b33-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.947943 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc746\" (UniqueName: \"kubernetes.io/projected/51af5185-040f-471e-8edf-3b7d27c84a5c-kube-api-access-wc746\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.948045 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fda6114-8d44-49ba-b30e-8ce9233f4b33-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.948198 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-config-data\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.948470 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.948584 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51af5185-040f-471e-8edf-3b7d27c84a5c-log-httpd\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.948719 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fda6114-8d44-49ba-b30e-8ce9233f4b33-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.948849 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.948979 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgsjz\" (UniqueName: \"kubernetes.io/projected/1fda6114-8d44-49ba-b30e-8ce9233f4b33-kube-api-access-sgsjz\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.949957 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51af5185-040f-471e-8edf-3b7d27c84a5c-run-httpd\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.950255 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51af5185-040f-471e-8edf-3b7d27c84a5c-log-httpd\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.962107 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.962181 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.962264 4644 scope.go:117] "RemoveContainer" containerID="8f400303386c89b5f4915ede18ee680310e248ff45456398b81f1abc25bf895d"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.964058 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-config-data\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:11 crc kubenswrapper[4644]: I0204 09:01:11.971123 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc746\" (UniqueName: \"kubernetes.io/projected/51af5185-040f-471e-8edf-3b7d27c84a5c-kube-api-access-wc746\") pod \"ceilometer-0\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " pod="openstack/ceilometer-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.050360 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fda6114-8d44-49ba-b30e-8ce9233f4b33-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.050433 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgsjz\" (UniqueName: \"kubernetes.io/projected/1fda6114-8d44-49ba-b30e-8ce9233f4b33-kube-api-access-sgsjz\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.050471 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fda6114-8d44-49ba-b30e-8ce9233f4b33-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.050489 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fda6114-8d44-49ba-b30e-8ce9233f4b33-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.050505 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fda6114-8d44-49ba-b30e-8ce9233f4b33-logs\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.050528 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.050575 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fda6114-8d44-49ba-b30e-8ce9233f4b33-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.050598 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fda6114-8d44-49ba-b30e-8ce9233f4b33-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.051791 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fda6114-8d44-49ba-b30e-8ce9233f4b33-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.053990 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fda6114-8d44-49ba-b30e-8ce9233f4b33-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.054238 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fda6114-8d44-49ba-b30e-8ce9233f4b33-logs\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.054685 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.065607 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fda6114-8d44-49ba-b30e-8ce9233f4b33-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.066116 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fda6114-8d44-49ba-b30e-8ce9233f4b33-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.076013 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fda6114-8d44-49ba-b30e-8ce9233f4b33-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.084210 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgsjz\" (UniqueName: \"kubernetes.io/projected/1fda6114-8d44-49ba-b30e-8ce9233f4b33-kube-api-access-sgsjz\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.111396 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"1fda6114-8d44-49ba-b30e-8ce9233f4b33\") " pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.248055 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.407022 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.585796 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-69d495f767-hzkrb" event={"ID":"6a04a95b-5411-483c-a0de-408fa44500e0","Type":"ContainerStarted","Data":"ee704dca61a7476fa9d6853e1a25659ff09b71ba194c736c1feda7c2a4c9a1f0"}
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.586004 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-69d495f767-hzkrb" event={"ID":"6a04a95b-5411-483c-a0de-408fa44500e0","Type":"ContainerStarted","Data":"e04b502f0e10c9581da190bdecfbf4aa22c64c479a9bf663c9773a0031b95010"}
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.587033 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-69d495f767-hzkrb"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.587063 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-69d495f767-hzkrb"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.628450 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-69d495f767-hzkrb" podStartSLOduration=12.628426777 podStartE2EDuration="12.628426777s" podCreationTimestamp="2026-02-04 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:01:12.622257773 +0000 UTC m=+1182.662315528" watchObservedRunningTime="2026-02-04 09:01:12.628426777 +0000 UTC m=+1182.668484532"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.638051 4644 generic.go:334] "Generic (PLEG): container finished" podID="1590cd61-9cd4-479e-9ba8-d323890eecc0" containerID="c88267f2681a46d2aa4025afe5a03dafedb80238e4041bd520cda47a5fd6a8ba" exitCode=0
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.638210 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1590cd61-9cd4-479e-9ba8-d323890eecc0","Type":"ContainerDied","Data":"c88267f2681a46d2aa4025afe5a03dafedb80238e4041bd520cda47a5fd6a8ba"}
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.664299 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29503261-bsnnv" event={"ID":"cc72f9f7-839f-402b-9576-e9daf7ed4d5b","Type":"ContainerStarted","Data":"abb0c8c4568734740893dbf6f9c4393142983dba979205acc0996a2e7dd3e90a"}
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.678212 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cbe3b5d-7379-447e-acac-6f7306ce230f" path="/var/lib/kubelet/pods/2cbe3b5d-7379-447e-acac-6f7306ce230f/volumes"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.678886 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ad50edf-1565-43a4-b0c6-7aef5bc98722" path="/var/lib/kubelet/pods/3ad50edf-1565-43a4-b0c6-7aef5bc98722/volumes"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.684433 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f" path="/var/lib/kubelet/pods/fe1d4d3f-cc34-4ea4-94e0-cd3ef193b28f/volumes"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.698418 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29503261-bsnnv" podStartSLOduration=12.698396179 podStartE2EDuration="12.698396179s" podCreationTimestamp="2026-02-04 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:01:12.689700627 +0000 UTC m=+1182.729758382" watchObservedRunningTime="2026-02-04 09:01:12.698396179 +0000 UTC m=+1182.738453934"
Feb 04 09:01:12 crc kubenswrapper[4644]: I0204 09:01:12.929183 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 09:01:12 crc kubenswrapper[4644]: W0204 09:01:12.977368 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51af5185_040f_471e_8edf_3b7d27c84a5c.slice/crio-1cb5cb713bbea31310a92408b5a75823c1461227a1c8d95c1383195afab110bc WatchSource:0}: Error finding container 1cb5cb713bbea31310a92408b5a75823c1461227a1c8d95c1383195afab110bc: Status 404 returned error can't find the container with id 1cb5cb713bbea31310a92408b5a75823c1461227a1c8d95c1383195afab110bc
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.248779 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.252431 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.277947 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-public-tls-certs\") pod \"1590cd61-9cd4-479e-9ba8-d323890eecc0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") "
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.278012 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-scripts\") pod \"1590cd61-9cd4-479e-9ba8-d323890eecc0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") "
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.278047 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-config-data\") pod \"1590cd61-9cd4-479e-9ba8-d323890eecc0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") "
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.278093 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phgfs\" (UniqueName: \"kubernetes.io/projected/1590cd61-9cd4-479e-9ba8-d323890eecc0-kube-api-access-phgfs\") pod \"1590cd61-9cd4-479e-9ba8-d323890eecc0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") "
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.278160 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1590cd61-9cd4-479e-9ba8-d323890eecc0-httpd-run\") pod \"1590cd61-9cd4-479e-9ba8-d323890eecc0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") "
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.278234 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-combined-ca-bundle\") pod \"1590cd61-9cd4-479e-9ba8-d323890eecc0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") "
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.278267 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1590cd61-9cd4-479e-9ba8-d323890eecc0-logs\") pod \"1590cd61-9cd4-479e-9ba8-d323890eecc0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") "
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.278405 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"1590cd61-9cd4-479e-9ba8-d323890eecc0\" (UID: \"1590cd61-9cd4-479e-9ba8-d323890eecc0\") "
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.294572 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1590cd61-9cd4-479e-9ba8-d323890eecc0-logs" (OuterVolumeSpecName: "logs") pod "1590cd61-9cd4-479e-9ba8-d323890eecc0" (UID: "1590cd61-9cd4-479e-9ba8-d323890eecc0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.294803 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1590cd61-9cd4-479e-9ba8-d323890eecc0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1590cd61-9cd4-479e-9ba8-d323890eecc0" (UID: "1590cd61-9cd4-479e-9ba8-d323890eecc0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.308532 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.309257 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1590cd61-9cd4-479e-9ba8-d323890eecc0-kube-api-access-phgfs" (OuterVolumeSpecName: "kube-api-access-phgfs") pod "1590cd61-9cd4-479e-9ba8-d323890eecc0" (UID: "1590cd61-9cd4-479e-9ba8-d323890eecc0"). InnerVolumeSpecName "kube-api-access-phgfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.309421 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "1590cd61-9cd4-479e-9ba8-d323890eecc0" (UID: "1590cd61-9cd4-479e-9ba8-d323890eecc0"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.312014 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-scripts" (OuterVolumeSpecName: "scripts") pod "1590cd61-9cd4-479e-9ba8-d323890eecc0" (UID: "1590cd61-9cd4-479e-9ba8-d323890eecc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.386630 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.386665 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phgfs\" (UniqueName: \"kubernetes.io/projected/1590cd61-9cd4-479e-9ba8-d323890eecc0-kube-api-access-phgfs\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.386676 4644 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1590cd61-9cd4-479e-9ba8-d323890eecc0-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.386684 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1590cd61-9cd4-479e-9ba8-d323890eecc0-logs\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.386705 4644 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.415585 4644 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.425627 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1590cd61-9cd4-479e-9ba8-d323890eecc0" (UID: "1590cd61-9cd4-479e-9ba8-d323890eecc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.490413 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.490446 4644 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.521697 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1590cd61-9cd4-479e-9ba8-d323890eecc0" (UID: "1590cd61-9cd4-479e-9ba8-d323890eecc0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.570659 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-config-data" (OuterVolumeSpecName: "config-data") pod "1590cd61-9cd4-479e-9ba8-d323890eecc0" (UID: "1590cd61-9cd4-479e-9ba8-d323890eecc0"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.593688 4644 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.593733 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1590cd61-9cd4-479e-9ba8-d323890eecc0-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.727854 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1590cd61-9cd4-479e-9ba8-d323890eecc0","Type":"ContainerDied","Data":"317745156bbebf7f7212f2187a6e1940aeb5073832646b199ed801ad9d19ee1a"} Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.727903 4644 scope.go:117] "RemoveContainer" containerID="c88267f2681a46d2aa4025afe5a03dafedb80238e4041bd520cda47a5fd6a8ba" Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.728036 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.746819 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51af5185-040f-471e-8edf-3b7d27c84a5c","Type":"ContainerStarted","Data":"1cb5cb713bbea31310a92408b5a75823c1461227a1c8d95c1383195afab110bc"} Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.807377 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.829386 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1fda6114-8d44-49ba-b30e-8ce9233f4b33","Type":"ContainerStarted","Data":"bb1e24f1ea9a2f7aded84ac06d9d681a41e7c47a6ca1fde4d4eeb2ed78d488bb"} Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.834588 4644 scope.go:117] "RemoveContainer" containerID="f9608a5e50f74d623dd19b8d3c553a47020128c3f555e1ab6c2d0bf05a114380" Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.847765 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.930397 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 09:01:13 crc kubenswrapper[4644]: E0204 09:01:13.930975 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1590cd61-9cd4-479e-9ba8-d323890eecc0" containerName="glance-log" Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.931073 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="1590cd61-9cd4-479e-9ba8-d323890eecc0" containerName="glance-log" Feb 04 09:01:13 crc kubenswrapper[4644]: E0204 09:01:13.931131 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1590cd61-9cd4-479e-9ba8-d323890eecc0" containerName="glance-httpd" Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.931180 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="1590cd61-9cd4-479e-9ba8-d323890eecc0" containerName="glance-httpd" Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.931524 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="1590cd61-9cd4-479e-9ba8-d323890eecc0" containerName="glance-httpd" Feb 04 09:01:13 crc 
kubenswrapper[4644]: I0204 09:01:13.931587 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="1590cd61-9cd4-479e-9ba8-d323890eecc0" containerName="glance-log" Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.932545 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.936263 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.936549 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 04 09:01:13 crc kubenswrapper[4644]: I0204 09:01:13.958200 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.120396 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hz4v\" (UniqueName: \"kubernetes.io/projected/c737ef12-0ce6-47d8-9773-0244eff8200b-kube-api-access-8hz4v\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.120749 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c737ef12-0ce6-47d8-9773-0244eff8200b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.120826 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.120892 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c737ef12-0ce6-47d8-9773-0244eff8200b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.120929 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c737ef12-0ce6-47d8-9773-0244eff8200b-scripts\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.121220 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c737ef12-0ce6-47d8-9773-0244eff8200b-logs\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.121253 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c737ef12-0ce6-47d8-9773-0244eff8200b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.121383 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c737ef12-0ce6-47d8-9773-0244eff8200b-config-data\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.222740 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c737ef12-0ce6-47d8-9773-0244eff8200b-config-data\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.222795 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hz4v\" (UniqueName: \"kubernetes.io/projected/c737ef12-0ce6-47d8-9773-0244eff8200b-kube-api-access-8hz4v\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.222815 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c737ef12-0ce6-47d8-9773-0244eff8200b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.222859 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.222888 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c737ef12-0ce6-47d8-9773-0244eff8200b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.222908 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c737ef12-0ce6-47d8-9773-0244eff8200b-scripts\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.222954 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c737ef12-0ce6-47d8-9773-0244eff8200b-logs\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.222976 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c737ef12-0ce6-47d8-9773-0244eff8200b-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.223535 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c737ef12-0ce6-47d8-9773-0244eff8200b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.228665 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c737ef12-0ce6-47d8-9773-0244eff8200b-config-data\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.232574 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c737ef12-0ce6-47d8-9773-0244eff8200b-logs\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.233111 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.236817 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c737ef12-0ce6-47d8-9773-0244eff8200b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.250941 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c737ef12-0ce6-47d8-9773-0244eff8200b-scripts\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.251068 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c737ef12-0ce6-47d8-9773-0244eff8200b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.261039 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hz4v\" (UniqueName: \"kubernetes.io/projected/c737ef12-0ce6-47d8-9773-0244eff8200b-kube-api-access-8hz4v\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.280968 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"c737ef12-0ce6-47d8-9773-0244eff8200b\") " pod="openstack/glance-default-external-api-0" Feb 04 
09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.291048 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.678833 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1590cd61-9cd4-479e-9ba8-d323890eecc0" path="/var/lib/kubelet/pods/1590cd61-9cd4-479e-9ba8-d323890eecc0/volumes" Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.846000 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51af5185-040f-471e-8edf-3b7d27c84a5c","Type":"ContainerStarted","Data":"2eda5617bcbdff0f6633d325c0182478d3d69586859c781e8ffe31d106deee29"} Feb 04 09:01:14 crc kubenswrapper[4644]: I0204 09:01:14.848491 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1fda6114-8d44-49ba-b30e-8ce9233f4b33","Type":"ContainerStarted","Data":"b97b8e276901ca5394fd889ae778257d94552fb725d1c6b68999fac6317073c5"} Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.135072 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 09:01:15 crc kubenswrapper[4644]: W0204 09:01:15.140557 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc737ef12_0ce6_47d8_9773_0244eff8200b.slice/crio-e80e984c40a7abff2599d28578691591b4bbc14dbd6bd8597d5b2ff2fd5b5e21 WatchSource:0}: Error finding container e80e984c40a7abff2599d28578691591b4bbc14dbd6bd8597d5b2ff2fd5b5e21: Status 404 returned error can't find the container with id e80e984c40a7abff2599d28578691591b4bbc14dbd6bd8597d5b2ff2fd5b5e21 Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.597486 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.705245 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-combined-ca-bundle\") pod \"aaf68dbc-3305-4745-b403-c5426622f8ed\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.705788 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-config-data-custom\") pod \"aaf68dbc-3305-4745-b403-c5426622f8ed\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.705888 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaf68dbc-3305-4745-b403-c5426622f8ed-logs\") pod \"aaf68dbc-3305-4745-b403-c5426622f8ed\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.705972 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-config-data\") pod \"aaf68dbc-3305-4745-b403-c5426622f8ed\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.706078 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaf68dbc-3305-4745-b403-c5426622f8ed-etc-machine-id\") pod \"aaf68dbc-3305-4745-b403-c5426622f8ed\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.706144 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-scripts\") pod \"aaf68dbc-3305-4745-b403-c5426622f8ed\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.706270 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz5dr\" (UniqueName: \"kubernetes.io/projected/aaf68dbc-3305-4745-b403-c5426622f8ed-kube-api-access-mz5dr\") pod \"aaf68dbc-3305-4745-b403-c5426622f8ed\" (UID: \"aaf68dbc-3305-4745-b403-c5426622f8ed\") " Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.711500 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aaf68dbc-3305-4745-b403-c5426622f8ed-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "aaf68dbc-3305-4745-b403-c5426622f8ed" (UID: "aaf68dbc-3305-4745-b403-c5426622f8ed"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.713638 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaf68dbc-3305-4745-b403-c5426622f8ed-logs" (OuterVolumeSpecName: "logs") pod "aaf68dbc-3305-4745-b403-c5426622f8ed" (UID: "aaf68dbc-3305-4745-b403-c5426622f8ed"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.726638 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf68dbc-3305-4745-b403-c5426622f8ed-kube-api-access-mz5dr" (OuterVolumeSpecName: "kube-api-access-mz5dr") pod "aaf68dbc-3305-4745-b403-c5426622f8ed" (UID: "aaf68dbc-3305-4745-b403-c5426622f8ed"). InnerVolumeSpecName "kube-api-access-mz5dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.728622 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "aaf68dbc-3305-4745-b403-c5426622f8ed" (UID: "aaf68dbc-3305-4745-b403-c5426622f8ed"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.735307 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-scripts" (OuterVolumeSpecName: "scripts") pod "aaf68dbc-3305-4745-b403-c5426622f8ed" (UID: "aaf68dbc-3305-4745-b403-c5426622f8ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.772506 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaf68dbc-3305-4745-b403-c5426622f8ed" (UID: "aaf68dbc-3305-4745-b403-c5426622f8ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.811047 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.811092 4644 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.811104 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaf68dbc-3305-4745-b403-c5426622f8ed-logs\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.811117 4644 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaf68dbc-3305-4745-b403-c5426622f8ed-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.811128 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.811158 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz5dr\" (UniqueName: \"kubernetes.io/projected/aaf68dbc-3305-4745-b403-c5426622f8ed-kube-api-access-mz5dr\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.835860 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-config-data" (OuterVolumeSpecName: "config-data") pod "aaf68dbc-3305-4745-b403-c5426622f8ed" (UID: "aaf68dbc-3305-4745-b403-c5426622f8ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.875407 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" probeResult="failure" output=< Feb 04 09:01:15 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:01:15 crc kubenswrapper[4644]: > Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.901650 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51af5185-040f-471e-8edf-3b7d27c84a5c","Type":"ContainerStarted","Data":"8202fdfc9737a816573e7d4316b884b04cbbb788898815ad138b426d3dc99352"} Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.912794 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf68dbc-3305-4745-b403-c5426622f8ed-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.913962 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c737ef12-0ce6-47d8-9773-0244eff8200b","Type":"ContainerStarted","Data":"e80e984c40a7abff2599d28578691591b4bbc14dbd6bd8597d5b2ff2fd5b5e21"} Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.934787 4644 generic.go:334] "Generic (PLEG): container finished" podID="aaf68dbc-3305-4745-b403-c5426622f8ed" containerID="0bc621bc1174064f4bc6719128c19780fac1689570dcf197a950081c458dcfaa" exitCode=137 Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.934832 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aaf68dbc-3305-4745-b403-c5426622f8ed","Type":"ContainerDied","Data":"0bc621bc1174064f4bc6719128c19780fac1689570dcf197a950081c458dcfaa"} Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.934860 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aaf68dbc-3305-4745-b403-c5426622f8ed","Type":"ContainerDied","Data":"12995f62c05743c6b8d1d4af7d9abd4d6a681c0f9a62a2e746acc1d15cbdbfa0"} Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.934877 4644 scope.go:117] "RemoveContainer" containerID="0bc621bc1174064f4bc6719128c19780fac1689570dcf197a950081c458dcfaa" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.935001 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 04 09:01:15 crc kubenswrapper[4644]: I0204 09:01:15.996418 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.003229 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.022114 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 04 09:01:16 crc kubenswrapper[4644]: E0204 09:01:16.022607 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf68dbc-3305-4745-b403-c5426622f8ed" containerName="cinder-api-log" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.022627 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf68dbc-3305-4745-b403-c5426622f8ed" containerName="cinder-api-log" Feb 04 09:01:16 crc kubenswrapper[4644]: E0204 09:01:16.022649 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf68dbc-3305-4745-b403-c5426622f8ed" containerName="cinder-api" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.022656 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf68dbc-3305-4745-b403-c5426622f8ed" containerName="cinder-api" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.022824 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf68dbc-3305-4745-b403-c5426622f8ed" containerName="cinder-api-log" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.022843 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf68dbc-3305-4745-b403-c5426622f8ed" containerName="cinder-api" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.025909 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.027699 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.038880 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.039106 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.087618 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.119129 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g96cs\" (UniqueName: \"kubernetes.io/projected/4109caeb-65a7-4c6b-b09c-83da593a1ef2-kube-api-access-g96cs\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.119469 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-config-data-custom\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.119491 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-config-data\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.119526 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.119598 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.119615 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4109caeb-65a7-4c6b-b09c-83da593a1ef2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.119647 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4109caeb-65a7-4c6b-b09c-83da593a1ef2-logs\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.119665 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-scripts\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.119696 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.162776 4644 scope.go:117] "RemoveContainer" containerID="f54103c71497c11b318336824239516f367bca780705e6482cacd5732edd7516" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.222482 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-config-data\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.222721 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.222938 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.223052 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4109caeb-65a7-4c6b-b09c-83da593a1ef2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.223225 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4109caeb-65a7-4c6b-b09c-83da593a1ef2-logs\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.223348 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-scripts\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.223488 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.223625 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g96cs\" (UniqueName: \"kubernetes.io/projected/4109caeb-65a7-4c6b-b09c-83da593a1ef2-kube-api-access-g96cs\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " 
pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.223745 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-config-data-custom\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.225517 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4109caeb-65a7-4c6b-b09c-83da593a1ef2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.225901 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4109caeb-65a7-4c6b-b09c-83da593a1ef2-logs\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.259931 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.260277 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.261785 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-config-data\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.268063 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-config-data-custom\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.281009 4644 scope.go:117] "RemoveContainer" containerID="0bc621bc1174064f4bc6719128c19780fac1689570dcf197a950081c458dcfaa" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.282049 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g96cs\" (UniqueName: \"kubernetes.io/projected/4109caeb-65a7-4c6b-b09c-83da593a1ef2-kube-api-access-g96cs\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: E0204 09:01:16.283061 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bc621bc1174064f4bc6719128c19780fac1689570dcf197a950081c458dcfaa\": container with ID starting with 0bc621bc1174064f4bc6719128c19780fac1689570dcf197a950081c458dcfaa not found: ID does not exist" containerID="0bc621bc1174064f4bc6719128c19780fac1689570dcf197a950081c458dcfaa" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.283170 4644 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bc621bc1174064f4bc6719128c19780fac1689570dcf197a950081c458dcfaa"} err="failed to get container status \"0bc621bc1174064f4bc6719128c19780fac1689570dcf197a950081c458dcfaa\": rpc error: code = NotFound desc = could not find container \"0bc621bc1174064f4bc6719128c19780fac1689570dcf197a950081c458dcfaa\": container with ID starting with 0bc621bc1174064f4bc6719128c19780fac1689570dcf197a950081c458dcfaa not found: ID does not exist" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.283280 4644 scope.go:117] "RemoveContainer" containerID="f54103c71497c11b318336824239516f367bca780705e6482cacd5732edd7516" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.284377 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-scripts\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.286975 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4109caeb-65a7-4c6b-b09c-83da593a1ef2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4109caeb-65a7-4c6b-b09c-83da593a1ef2\") " pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: E0204 09:01:16.287363 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f54103c71497c11b318336824239516f367bca780705e6482cacd5732edd7516\": container with ID starting with f54103c71497c11b318336824239516f367bca780705e6482cacd5732edd7516 not found: ID does not exist" containerID="f54103c71497c11b318336824239516f367bca780705e6482cacd5732edd7516" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.287445 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f54103c71497c11b318336824239516f367bca780705e6482cacd5732edd7516"} err="failed to get container status \"f54103c71497c11b318336824239516f367bca780705e6482cacd5732edd7516\": rpc error: code = NotFound desc = could not find container \"f54103c71497c11b318336824239516f367bca780705e6482cacd5732edd7516\": container with ID starting with f54103c71497c11b318336824239516f367bca780705e6482cacd5732edd7516 not found: ID does not exist" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.359841 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.613759 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.616295 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.682009 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaf68dbc-3305-4745-b403-c5426622f8ed" path="/var/lib/kubelet/pods/aaf68dbc-3305-4745-b403-c5426622f8ed/volumes" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.688474 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-658bfcb544-88gj4" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.688529 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-658bfcb544-88gj4" Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.961781 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1fda6114-8d44-49ba-b30e-8ce9233f4b33","Type":"ContainerStarted","Data":"5b898f049dfea4c76582a7c802f671d18da10d6768948f9d94500a1df4f85890"} Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.964461 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c737ef12-0ce6-47d8-9773-0244eff8200b","Type":"ContainerStarted","Data":"0b7f47098137edc83fa8894bf8aa90a2065c5d5a2abf041951ca62ac77c038bf"} Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.970400 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51af5185-040f-471e-8edf-3b7d27c84a5c","Type":"ContainerStarted","Data":"e0e7a8c93d9a0b531de443093887081906ab04b884c957e2e408cdcb41a068dc"} Feb 04 09:01:16 crc kubenswrapper[4644]: I0204 09:01:16.988498 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.988476916 podStartE2EDuration="5.988476916s" podCreationTimestamp="2026-02-04 09:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:01:16.985486426 +0000 UTC m=+1187.025544181" watchObservedRunningTime="2026-02-04 09:01:16.988476916 +0000 UTC m=+1187.028534681" Feb 04 09:01:17 crc kubenswrapper[4644]: I0204 09:01:17.023903 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 04 09:01:17 crc kubenswrapper[4644]: W0204 09:01:17.036058 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4109caeb_65a7_4c6b_b09c_83da593a1ef2.slice/crio-a210df807e0bf124b39e7b4f4c1884243e78d784a2cffa9bed33c6e1825eb68c WatchSource:0}: Error finding container a210df807e0bf124b39e7b4f4c1884243e78d784a2cffa9bed33c6e1825eb68c: Status 404 returned error can't find the container with id a210df807e0bf124b39e7b4f4c1884243e78d784a2cffa9bed33c6e1825eb68c Feb 04 09:01:17 crc kubenswrapper[4644]: I0204 09:01:17.982291 4644 generic.go:334] "Generic (PLEG): container finished" podID="cc72f9f7-839f-402b-9576-e9daf7ed4d5b" containerID="abb0c8c4568734740893dbf6f9c4393142983dba979205acc0996a2e7dd3e90a" exitCode=0 Feb 04 09:01:17 crc kubenswrapper[4644]: I0204 09:01:17.982436 4644 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/keystone-cron-29503261-bsnnv" event={"ID":"cc72f9f7-839f-402b-9576-e9daf7ed4d5b","Type":"ContainerDied","Data":"abb0c8c4568734740893dbf6f9c4393142983dba979205acc0996a2e7dd3e90a"} Feb 04 09:01:17 crc kubenswrapper[4644]: I0204 09:01:17.988237 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c737ef12-0ce6-47d8-9773-0244eff8200b","Type":"ContainerStarted","Data":"bf3315f745d26513017815977939814705e68e1eaf63be5fa28ecda31b75e850"} Feb 04 09:01:17 crc kubenswrapper[4644]: I0204 09:01:17.997992 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4109caeb-65a7-4c6b-b09c-83da593a1ef2","Type":"ContainerStarted","Data":"adc9bb4a52535fc97d187a30f64afa2377397aba3bf8a1f0a7740dd6bc1b0191"} Feb 04 09:01:17 crc kubenswrapper[4644]: I0204 09:01:17.998049 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4109caeb-65a7-4c6b-b09c-83da593a1ef2","Type":"ContainerStarted","Data":"a210df807e0bf124b39e7b4f4c1884243e78d784a2cffa9bed33c6e1825eb68c"} Feb 04 09:01:18 crc kubenswrapper[4644]: I0204 09:01:18.043612 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.043592265 podStartE2EDuration="5.043592265s" podCreationTimestamp="2026-02-04 09:01:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:01:18.038839288 +0000 UTC m=+1188.078897053" watchObservedRunningTime="2026-02-04 09:01:18.043592265 +0000 UTC m=+1188.083650020" Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.027217 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4109caeb-65a7-4c6b-b09c-83da593a1ef2","Type":"ContainerStarted","Data":"98981d50d3b8306133d2c40bcd539f1f78b85df8defbcd63acdc3f7cc5a32898"} Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.028574 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.046400 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.046379452 podStartE2EDuration="4.046379452s" podCreationTimestamp="2026-02-04 09:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:01:19.043512695 +0000 UTC m=+1189.083570470" watchObservedRunningTime="2026-02-04 09:01:19.046379452 +0000 UTC m=+1189.086437207" Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.444786 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.621628 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-config-data\") pod \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\" (UID: \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.621784 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqtck\" (UniqueName: \"kubernetes.io/projected/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-kube-api-access-jqtck\") pod \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\" (UID: \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.621824 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-fernet-keys\") pod \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\" (UID: \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.621894 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-combined-ca-bundle\") pod \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\" (UID: \"cc72f9f7-839f-402b-9576-e9daf7ed4d5b\") " Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.636019 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cc72f9f7-839f-402b-9576-e9daf7ed4d5b" (UID: "cc72f9f7-839f-402b-9576-e9daf7ed4d5b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.648571 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-kube-api-access-jqtck" (OuterVolumeSpecName: "kube-api-access-jqtck") pod "cc72f9f7-839f-402b-9576-e9daf7ed4d5b" (UID: "cc72f9f7-839f-402b-9576-e9daf7ed4d5b"). InnerVolumeSpecName "kube-api-access-jqtck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.664847 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc72f9f7-839f-402b-9576-e9daf7ed4d5b" (UID: "cc72f9f7-839f-402b-9576-e9daf7ed4d5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.709072 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-config-data" (OuterVolumeSpecName: "config-data") pod "cc72f9f7-839f-402b-9576-e9daf7ed4d5b" (UID: "cc72f9f7-839f-402b-9576-e9daf7ed4d5b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.723931 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqtck\" (UniqueName: \"kubernetes.io/projected/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-kube-api-access-jqtck\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.723966 4644 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.723979 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:19 crc kubenswrapper[4644]: I0204 09:01:19.723989 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc72f9f7-839f-402b-9576-e9daf7ed4d5b-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:20 crc kubenswrapper[4644]: I0204 09:01:20.046357 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51af5185-040f-471e-8edf-3b7d27c84a5c","Type":"ContainerStarted","Data":"1360142c2948e805ee94246eb3d6071ff4878d2894b2e19454ee888ec289e046"} Feb 04 09:01:20 crc kubenswrapper[4644]: I0204 09:01:20.046566 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="ceilometer-central-agent" containerID="cri-o://2eda5617bcbdff0f6633d325c0182478d3d69586859c781e8ffe31d106deee29" gracePeriod=30 Feb 04 09:01:20 crc kubenswrapper[4644]: I0204 09:01:20.046645 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="sg-core" containerID="cri-o://e0e7a8c93d9a0b531de443093887081906ab04b884c957e2e408cdcb41a068dc" gracePeriod=30 Feb 04 09:01:20 crc kubenswrapper[4644]: I0204 09:01:20.046641 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="proxy-httpd" containerID="cri-o://1360142c2948e805ee94246eb3d6071ff4878d2894b2e19454ee888ec289e046" gracePeriod=30 Feb 04 09:01:20 crc kubenswrapper[4644]: I0204 09:01:20.046682 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="ceilometer-notification-agent" containerID="cri-o://8202fdfc9737a816573e7d4316b884b04cbbb788898815ad138b426d3dc99352" gracePeriod=30 Feb 04 09:01:20 crc kubenswrapper[4644]: I0204 09:01:20.046675 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 04 09:01:20 crc kubenswrapper[4644]: I0204 09:01:20.053008 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29503261-bsnnv" Feb 04 09:01:20 crc kubenswrapper[4644]: I0204 09:01:20.053657 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29503261-bsnnv" event={"ID":"cc72f9f7-839f-402b-9576-e9daf7ed4d5b","Type":"ContainerDied","Data":"0e0f8c900f976371251119a4a2a3242684b13554492ac4b31f497ea39a49205d"} Feb 04 09:01:20 crc kubenswrapper[4644]: I0204 09:01:20.053692 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e0f8c900f976371251119a4a2a3242684b13554492ac4b31f497ea39a49205d" Feb 04 09:01:20 crc kubenswrapper[4644]: I0204 09:01:20.951212 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:20 crc kubenswrapper[4644]: I0204 09:01:20.951822 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-69d495f767-hzkrb" Feb 04 09:01:21 crc kubenswrapper[4644]: I0204 09:01:21.017959 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.090235209 podStartE2EDuration="10.017935141s" podCreationTimestamp="2026-02-04 09:01:11 +0000 UTC" firstStartedPulling="2026-02-04 09:01:12.979862596 +0000 UTC m=+1183.019920341" lastFinishedPulling="2026-02-04 09:01:18.907562518 +0000 UTC m=+1188.947620273" observedRunningTime="2026-02-04 09:01:20.0845739 +0000 UTC m=+1190.124631655" watchObservedRunningTime="2026-02-04 09:01:21.017935141 +0000 UTC m=+1191.057992906" Feb 04 09:01:21 crc kubenswrapper[4644]: I0204 09:01:21.092865 4644 generic.go:334] "Generic (PLEG): container finished" podID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerID="1360142c2948e805ee94246eb3d6071ff4878d2894b2e19454ee888ec289e046" exitCode=0 Feb 04 09:01:21 crc kubenswrapper[4644]: I0204 09:01:21.092934 4644 generic.go:334] "Generic (PLEG): container finished" podID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerID="e0e7a8c93d9a0b531de443093887081906ab04b884c957e2e408cdcb41a068dc" exitCode=2 Feb 04 09:01:21 crc kubenswrapper[4644]: I0204 09:01:21.092946 4644 generic.go:334] "Generic (PLEG): container finished" podID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerID="8202fdfc9737a816573e7d4316b884b04cbbb788898815ad138b426d3dc99352" exitCode=0 Feb 04 09:01:21 crc kubenswrapper[4644]: I0204 09:01:21.094035 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51af5185-040f-471e-8edf-3b7d27c84a5c","Type":"ContainerDied","Data":"1360142c2948e805ee94246eb3d6071ff4878d2894b2e19454ee888ec289e046"} Feb 04 09:01:21 crc kubenswrapper[4644]: I0204 09:01:21.094071 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51af5185-040f-471e-8edf-3b7d27c84a5c","Type":"ContainerDied","Data":"e0e7a8c93d9a0b531de443093887081906ab04b884c957e2e408cdcb41a068dc"} Feb 04 09:01:21 crc kubenswrapper[4644]: I0204 09:01:21.094090 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51af5185-040f-471e-8edf-3b7d27c84a5c","Type":"ContainerDied","Data":"8202fdfc9737a816573e7d4316b884b04cbbb788898815ad138b426d3dc99352"} Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.034071 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.105105 4644 generic.go:334] "Generic (PLEG): container finished" podID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerID="2eda5617bcbdff0f6633d325c0182478d3d69586859c781e8ffe31d106deee29" exitCode=0 Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.105144 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51af5185-040f-471e-8edf-3b7d27c84a5c","Type":"ContainerDied","Data":"2eda5617bcbdff0f6633d325c0182478d3d69586859c781e8ffe31d106deee29"} Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.105169 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51af5185-040f-471e-8edf-3b7d27c84a5c","Type":"ContainerDied","Data":"1cb5cb713bbea31310a92408b5a75823c1461227a1c8d95c1383195afab110bc"} Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.105185 4644 scope.go:117] "RemoveContainer" containerID="1360142c2948e805ee94246eb3d6071ff4878d2894b2e19454ee888ec289e046" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.105295 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.140551 4644 scope.go:117] "RemoveContainer" containerID="e0e7a8c93d9a0b531de443093887081906ab04b884c957e2e408cdcb41a068dc" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.175570 4644 scope.go:117] "RemoveContainer" containerID="8202fdfc9737a816573e7d4316b884b04cbbb788898815ad138b426d3dc99352" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.186965 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51af5185-040f-471e-8edf-3b7d27c84a5c-run-httpd\") pod \"51af5185-040f-471e-8edf-3b7d27c84a5c\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.187078 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-config-data\") pod \"51af5185-040f-471e-8edf-3b7d27c84a5c\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.187111 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-sg-core-conf-yaml\") pod \"51af5185-040f-471e-8edf-3b7d27c84a5c\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.187160 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51af5185-040f-471e-8edf-3b7d27c84a5c-log-httpd\") pod \"51af5185-040f-471e-8edf-3b7d27c84a5c\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.187192 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-combined-ca-bundle\") pod \"51af5185-040f-471e-8edf-3b7d27c84a5c\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.187249 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc746\" (UniqueName: 
\"kubernetes.io/projected/51af5185-040f-471e-8edf-3b7d27c84a5c-kube-api-access-wc746\") pod \"51af5185-040f-471e-8edf-3b7d27c84a5c\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.187308 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-scripts\") pod \"51af5185-040f-471e-8edf-3b7d27c84a5c\" (UID: \"51af5185-040f-471e-8edf-3b7d27c84a5c\") " Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.187801 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51af5185-040f-471e-8edf-3b7d27c84a5c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "51af5185-040f-471e-8edf-3b7d27c84a5c" (UID: "51af5185-040f-471e-8edf-3b7d27c84a5c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.188883 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51af5185-040f-471e-8edf-3b7d27c84a5c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "51af5185-040f-471e-8edf-3b7d27c84a5c" (UID: "51af5185-040f-471e-8edf-3b7d27c84a5c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.189911 4644 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51af5185-040f-471e-8edf-3b7d27c84a5c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.189928 4644 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51af5185-040f-471e-8edf-3b7d27c84a5c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.194957 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-scripts" (OuterVolumeSpecName: "scripts") pod "51af5185-040f-471e-8edf-3b7d27c84a5c" (UID: "51af5185-040f-471e-8edf-3b7d27c84a5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.198799 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51af5185-040f-471e-8edf-3b7d27c84a5c-kube-api-access-wc746" (OuterVolumeSpecName: "kube-api-access-wc746") pod "51af5185-040f-471e-8edf-3b7d27c84a5c" (UID: "51af5185-040f-471e-8edf-3b7d27c84a5c"). InnerVolumeSpecName "kube-api-access-wc746". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.231555 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "51af5185-040f-471e-8edf-3b7d27c84a5c" (UID: "51af5185-040f-471e-8edf-3b7d27c84a5c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.296185 4644 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.296225 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc746\" (UniqueName: \"kubernetes.io/projected/51af5185-040f-471e-8edf-3b7d27c84a5c-kube-api-access-wc746\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.296239 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.332618 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-config-data" (OuterVolumeSpecName: "config-data") pod "51af5185-040f-471e-8edf-3b7d27c84a5c" (UID: "51af5185-040f-471e-8edf-3b7d27c84a5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.334322 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51af5185-040f-471e-8edf-3b7d27c84a5c" (UID: "51af5185-040f-471e-8edf-3b7d27c84a5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.397984 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.398024 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51af5185-040f-471e-8edf-3b7d27c84a5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.408400 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.408456 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.456932 4644 scope.go:117] "RemoveContainer" containerID="2eda5617bcbdff0f6633d325c0182478d3d69586859c781e8ffe31d106deee29" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.463228 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.470972 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.500802 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:01:22 crc kubenswrapper[4644]: E0204 09:01:22.501143 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="ceilometer-notification-agent" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.501160 4644 
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.501160 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="ceilometer-notification-agent"
Feb 04 09:01:22 crc kubenswrapper[4644]: E0204 09:01:22.501175 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="proxy-httpd"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.501181 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="proxy-httpd"
Feb 04 09:01:22 crc kubenswrapper[4644]: E0204 09:01:22.501206 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc72f9f7-839f-402b-9576-e9daf7ed4d5b" containerName="keystone-cron"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.501214 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc72f9f7-839f-402b-9576-e9daf7ed4d5b" containerName="keystone-cron"
Feb 04 09:01:22 crc kubenswrapper[4644]: E0204 09:01:22.501232 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="ceilometer-central-agent"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.501238 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="ceilometer-central-agent"
Feb 04 09:01:22 crc kubenswrapper[4644]: E0204 09:01:22.501254 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="sg-core"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.501262 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="sg-core"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.501438 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="proxy-httpd"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.501461 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="sg-core"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.501471 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc72f9f7-839f-402b-9576-e9daf7ed4d5b" containerName="keystone-cron"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.501482 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="ceilometer-notification-agent"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.501492 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" containerName="ceilometer-central-agent"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.501970 4644 scope.go:117] "RemoveContainer" containerID="1360142c2948e805ee94246eb3d6071ff4878d2894b2e19454ee888ec289e046"
Feb 04 09:01:22 crc kubenswrapper[4644]: E0204 09:01:22.502804 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1360142c2948e805ee94246eb3d6071ff4878d2894b2e19454ee888ec289e046\": container with ID starting with 1360142c2948e805ee94246eb3d6071ff4878d2894b2e19454ee888ec289e046 not found: ID does not exist" containerID="1360142c2948e805ee94246eb3d6071ff4878d2894b2e19454ee888ec289e046"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.502921 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1360142c2948e805ee94246eb3d6071ff4878d2894b2e19454ee888ec289e046"} err="failed to get container status \"1360142c2948e805ee94246eb3d6071ff4878d2894b2e19454ee888ec289e046\": rpc error: code = NotFound desc = could not find container \"1360142c2948e805ee94246eb3d6071ff4878d2894b2e19454ee888ec289e046\": container with ID starting with 1360142c2948e805ee94246eb3d6071ff4878d2894b2e19454ee888ec289e046 not found: ID does not exist"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.503020 4644 scope.go:117] "RemoveContainer" containerID="e0e7a8c93d9a0b531de443093887081906ab04b884c957e2e408cdcb41a068dc"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.503460 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 09:01:22 crc kubenswrapper[4644]: E0204 09:01:22.505851 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0e7a8c93d9a0b531de443093887081906ab04b884c957e2e408cdcb41a068dc\": container with ID starting with e0e7a8c93d9a0b531de443093887081906ab04b884c957e2e408cdcb41a068dc not found: ID does not exist" containerID="e0e7a8c93d9a0b531de443093887081906ab04b884c957e2e408cdcb41a068dc"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.505886 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0e7a8c93d9a0b531de443093887081906ab04b884c957e2e408cdcb41a068dc"} err="failed to get container status \"e0e7a8c93d9a0b531de443093887081906ab04b884c957e2e408cdcb41a068dc\": rpc error: code = NotFound desc = could not find container \"e0e7a8c93d9a0b531de443093887081906ab04b884c957e2e408cdcb41a068dc\": container with ID starting with e0e7a8c93d9a0b531de443093887081906ab04b884c957e2e408cdcb41a068dc not found: ID does not exist"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.505915 4644 scope.go:117] "RemoveContainer" containerID="8202fdfc9737a816573e7d4316b884b04cbbb788898815ad138b426d3dc99352"
Feb 04 09:01:22 crc kubenswrapper[4644]: E0204 09:01:22.506617 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8202fdfc9737a816573e7d4316b884b04cbbb788898815ad138b426d3dc99352\": container with ID starting with 8202fdfc9737a816573e7d4316b884b04cbbb788898815ad138b426d3dc99352 not found: ID does not exist" containerID="8202fdfc9737a816573e7d4316b884b04cbbb788898815ad138b426d3dc99352"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.506665 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8202fdfc9737a816573e7d4316b884b04cbbb788898815ad138b426d3dc99352"} err="failed to get container status \"8202fdfc9737a816573e7d4316b884b04cbbb788898815ad138b426d3dc99352\": rpc error: code = NotFound desc = could not find container \"8202fdfc9737a816573e7d4316b884b04cbbb788898815ad138b426d3dc99352\": container with ID starting with 8202fdfc9737a816573e7d4316b884b04cbbb788898815ad138b426d3dc99352 not found: ID does not exist"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.506690 4644 scope.go:117] "RemoveContainer" containerID="2eda5617bcbdff0f6633d325c0182478d3d69586859c781e8ffe31d106deee29"
Feb 04 09:01:22 crc kubenswrapper[4644]: E0204 09:01:22.507043 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eda5617bcbdff0f6633d325c0182478d3d69586859c781e8ffe31d106deee29\": container with ID starting with 2eda5617bcbdff0f6633d325c0182478d3d69586859c781e8ffe31d106deee29 not found: ID does not exist" containerID="2eda5617bcbdff0f6633d325c0182478d3d69586859c781e8ffe31d106deee29"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.507092 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eda5617bcbdff0f6633d325c0182478d3d69586859c781e8ffe31d106deee29"} err="failed to get container status \"2eda5617bcbdff0f6633d325c0182478d3d69586859c781e8ffe31d106deee29\": rpc error: code = NotFound desc = could not find container \"2eda5617bcbdff0f6633d325c0182478d3d69586859c781e8ffe31d106deee29\": container with ID starting with 2eda5617bcbdff0f6633d325c0182478d3d69586859c781e8ffe31d106deee29 not found: ID does not exist"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.508396 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.508653 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.521762 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.558763 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.564772 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.670560 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51af5185-040f-471e-8edf-3b7d27c84a5c" path="/var/lib/kubelet/pods/51af5185-040f-471e-8edf-3b7d27c84a5c/volumes"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.707140 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-scripts\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.707186 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2831415-d88f-4c19-a272-4898fa142bd0-run-httpd\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.707222 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2831415-d88f-4c19-a272-4898fa142bd0-log-httpd\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0"
Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.707265 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-config-data\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0"
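The E-level "ContainerStatus from runtime service failed ... NotFound" entries here are noisy but expected: RemoveContainer has already deleted the container in CRI-O, and the follow-up status lookup for the same ID races with that deletion, so pod_container_deletor logs "DeleteContainer returned error" for an ID that is already gone. When triaging error-level output, pairs like these (and the RemoveStaleState entries above, which klog emits at error level despite being routine cleanup) can usually be filtered out first. A crude sketch of such a filter, assuming journal text on stdin; the substring heuristics are illustrative, not exhaustive:

// notfound-filter.go: drops the benign NotFound pairs shown above so that
// only unexpected error-level runtime entries remain.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, ": E0") { // klog error-level entries only
			continue
		}
		if strings.Contains(line, "could not find container") ||
			strings.Contains(line, "RemoveStaleState") {
			continue // benign: deletion race / stale cpuset-memory cleanup
		}
		fmt.Println(line)
	}
}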
\"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.707305 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.707349 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7whtf\" (UniqueName: \"kubernetes.io/projected/c2831415-d88f-4c19-a272-4898fa142bd0-kube-api-access-7whtf\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.809659 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-scripts\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.809714 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2831415-d88f-4c19-a272-4898fa142bd0-run-httpd\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.809856 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2831415-d88f-4c19-a272-4898fa142bd0-log-httpd\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.809951 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-config-data\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.809987 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.810021 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.810059 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7whtf\" (UniqueName: \"kubernetes.io/projected/c2831415-d88f-4c19-a272-4898fa142bd0-kube-api-access-7whtf\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.814287 4644 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2831415-d88f-4c19-a272-4898fa142bd0-run-httpd\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.814715 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2831415-d88f-4c19-a272-4898fa142bd0-log-httpd\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.820806 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-scripts\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.822667 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.824416 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.835662 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-config-data\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.837293 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7whtf\" (UniqueName: \"kubernetes.io/projected/c2831415-d88f-4c19-a272-4898fa142bd0-kube-api-access-7whtf\") pod \"ceilometer-0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " pod="openstack/ceilometer-0" Feb 04 09:01:22 crc kubenswrapper[4644]: I0204 09:01:22.874538 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 04 09:01:23 crc kubenswrapper[4644]: I0204 09:01:23.123321 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 04 09:01:23 crc kubenswrapper[4644]: I0204 09:01:23.123653 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 04 09:01:23 crc kubenswrapper[4644]: I0204 09:01:23.421806 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:01:23 crc kubenswrapper[4644]: W0204 09:01:23.433662 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2831415_d88f_4c19_a272_4898fa142bd0.slice/crio-b880fd69586b5a4897d8b5185a4eb8243e1965fa96ff6b81baac92fa597723ed WatchSource:0}: Error finding container b880fd69586b5a4897d8b5185a4eb8243e1965fa96ff6b81baac92fa597723ed: Status 404 returned error can't find the container with id b880fd69586b5a4897d8b5185a4eb8243e1965fa96ff6b81baac92fa597723ed Feb 04 09:01:24 crc kubenswrapper[4644]: I0204 09:01:24.133423 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2831415-d88f-4c19-a272-4898fa142bd0","Type":"ContainerStarted","Data":"b880fd69586b5a4897d8b5185a4eb8243e1965fa96ff6b81baac92fa597723ed"} Feb 04 09:01:24 crc kubenswrapper[4644]: I0204 09:01:24.135659 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d8596785-f659-4038-ac9a-a48c9a4dbd44","Type":"ContainerStarted","Data":"9c1d53cb0381f3abf48fc71f34c5110f13084b23f1c6d34c31204f14488c7a5d"} Feb 04 09:01:24 crc kubenswrapper[4644]: I0204 09:01:24.164531 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.571132899 podStartE2EDuration="35.164509429s" podCreationTimestamp="2026-02-04 09:00:49 +0000 UTC" firstStartedPulling="2026-02-04 09:00:50.605772457 +0000 UTC m=+1160.645830212" lastFinishedPulling="2026-02-04 09:01:23.199148997 +0000 UTC m=+1193.239206742" observedRunningTime="2026-02-04 09:01:24.15592002 +0000 UTC m=+1194.195977775" watchObservedRunningTime="2026-02-04 09:01:24.164509429 +0000 UTC m=+1194.204567184" Feb 04 09:01:24 crc kubenswrapper[4644]: I0204 09:01:24.291892 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 04 09:01:24 crc kubenswrapper[4644]: I0204 09:01:24.291938 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 04 09:01:24 crc kubenswrapper[4644]: I0204 09:01:24.325936 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 04 09:01:24 crc kubenswrapper[4644]: I0204 09:01:24.342670 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 04 09:01:24 crc kubenswrapper[4644]: I0204 09:01:24.916867 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 09:01:24 crc kubenswrapper[4644]: I0204 09:01:24.974507 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lffcr" Feb 04 09:01:25 crc kubenswrapper[4644]: I0204 09:01:25.146845 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"c2831415-d88f-4c19-a272-4898fa142bd0","Type":"ContainerStarted","Data":"18af029268efa27f781e3f91f3cc5729c59b0c00ed1000c22623524efb85f970"} Feb 04 09:01:25 crc kubenswrapper[4644]: I0204 09:01:25.146897 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 04 09:01:25 crc kubenswrapper[4644]: I0204 09:01:25.146911 4644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 09:01:25 crc kubenswrapper[4644]: I0204 09:01:25.146927 4644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 09:01:25 crc kubenswrapper[4644]: I0204 09:01:25.147381 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 04 09:01:25 crc kubenswrapper[4644]: I0204 09:01:25.158227 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lffcr"] Feb 04 09:01:25 crc kubenswrapper[4644]: I0204 09:01:25.515141 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:01:26 crc kubenswrapper[4644]: I0204 09:01:26.155620 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lffcr" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server" containerID="cri-o://8a2e5fd22e7f97bf61e75155bb89cfc69a8e13418a7202fab8440b576abbd81e" gracePeriod=2 Feb 04 09:01:26 crc kubenswrapper[4644]: I0204 09:01:26.155834 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2831415-d88f-4c19-a272-4898fa142bd0","Type":"ContainerStarted","Data":"27a747ec948e5e971fb5e60065603d5cfd080c275a543d09ee5fb11f00fdff9c"} Feb 04 09:01:26 crc kubenswrapper[4644]: I0204 09:01:26.156017 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2831415-d88f-4c19-a272-4898fa142bd0","Type":"ContainerStarted","Data":"a2dfe072daa067a30f643a86fe89f6eb328cfc0307a5f14750f0cbefe8e6b812"} Feb 04 09:01:26 crc kubenswrapper[4644]: I0204 09:01:26.339338 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 04 09:01:26 crc kubenswrapper[4644]: I0204 09:01:26.339432 4644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 09:01:26 crc kubenswrapper[4644]: I0204 09:01:26.614993 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fb9db66f6-v84nx" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 04 09:01:26 crc kubenswrapper[4644]: I0204 09:01:26.674960 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-658bfcb544-88gj4" podUID="676db25f-e0ad-48cc-af2c-88029d6eb80d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.188052 4644 generic.go:334] "Generic (PLEG): container finished" podID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerID="8a2e5fd22e7f97bf61e75155bb89cfc69a8e13418a7202fab8440b576abbd81e" exitCode=0 Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.188160 4644 
Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.188160 4644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.188170 4644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.189079 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lffcr" event={"ID":"1c536da4-4882-4716-b4ae-4894dcc769e1","Type":"ContainerDied","Data":"8a2e5fd22e7f97bf61e75155bb89cfc69a8e13418a7202fab8440b576abbd81e"}
Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.189142 4644 scope.go:117] "RemoveContainer" containerID="4fbe46b700677b8e24ec42642dc2f2b502bcb7925bc5346a880f37eedcce140a"
Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.282828 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lffcr"
Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.421931 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c536da4-4882-4716-b4ae-4894dcc769e1-utilities\") pod \"1c536da4-4882-4716-b4ae-4894dcc769e1\" (UID: \"1c536da4-4882-4716-b4ae-4894dcc769e1\") "
Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.422040 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c536da4-4882-4716-b4ae-4894dcc769e1-catalog-content\") pod \"1c536da4-4882-4716-b4ae-4894dcc769e1\" (UID: \"1c536da4-4882-4716-b4ae-4894dcc769e1\") "
Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.422070 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbhgm\" (UniqueName: \"kubernetes.io/projected/1c536da4-4882-4716-b4ae-4894dcc769e1-kube-api-access-vbhgm\") pod \"1c536da4-4882-4716-b4ae-4894dcc769e1\" (UID: \"1c536da4-4882-4716-b4ae-4894dcc769e1\") "
Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.422458 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c536da4-4882-4716-b4ae-4894dcc769e1-utilities" (OuterVolumeSpecName: "utilities") pod "1c536da4-4882-4716-b4ae-4894dcc769e1" (UID: "1c536da4-4882-4716-b4ae-4894dcc769e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.467224 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.470368 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c536da4-4882-4716-b4ae-4894dcc769e1-kube-api-access-vbhgm" (OuterVolumeSpecName: "kube-api-access-vbhgm") pod "1c536da4-4882-4716-b4ae-4894dcc769e1" (UID: "1c536da4-4882-4716-b4ae-4894dcc769e1"). InnerVolumeSpecName "kube-api-access-vbhgm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.542648 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c536da4-4882-4716-b4ae-4894dcc769e1-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.542671 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbhgm\" (UniqueName: \"kubernetes.io/projected/1c536da4-4882-4716-b4ae-4894dcc769e1-kube-api-access-vbhgm\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.619376 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c536da4-4882-4716-b4ae-4894dcc769e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c536da4-4882-4716-b4ae-4894dcc769e1" (UID: "1c536da4-4882-4716-b4ae-4894dcc769e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:01:27 crc kubenswrapper[4644]: I0204 09:01:27.645570 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c536da4-4882-4716-b4ae-4894dcc769e1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.199112 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lffcr" event={"ID":"1c536da4-4882-4716-b4ae-4894dcc769e1","Type":"ContainerDied","Data":"4dca71096204f8c3867ab60eb54e52b8e32e90d45da37044a2d9343ad52274df"}
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.199124 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lffcr"
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.200095 4644 scope.go:117] "RemoveContainer" containerID="8a2e5fd22e7f97bf61e75155bb89cfc69a8e13418a7202fab8440b576abbd81e"
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.211607 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2831415-d88f-4c19-a272-4898fa142bd0","Type":"ContainerStarted","Data":"a86a224c6105fdf75148521d8aad7b3c7dd410b4673e5af950a65cab3e2385a6"}
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.211760 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="ceilometer-central-agent" containerID="cri-o://18af029268efa27f781e3f91f3cc5729c59b0c00ed1000c22623524efb85f970" gracePeriod=30
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.212032 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.212076 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="proxy-httpd" containerID="cri-o://a86a224c6105fdf75148521d8aad7b3c7dd410b4673e5af950a65cab3e2385a6" gracePeriod=30
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.212118 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="sg-core" containerID="cri-o://27a747ec948e5e971fb5e60065603d5cfd080c275a543d09ee5fb11f00fdff9c" gracePeriod=30
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.212150 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="ceilometer-notification-agent" containerID="cri-o://a2dfe072daa067a30f643a86fe89f6eb328cfc0307a5f14750f0cbefe8e6b812" gracePeriod=30
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.258348 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.928120302 podStartE2EDuration="6.258311865s" podCreationTimestamp="2026-02-04 09:01:22 +0000 UTC" firstStartedPulling="2026-02-04 09:01:23.438626399 +0000 UTC m=+1193.478684154" lastFinishedPulling="2026-02-04 09:01:27.768817962 +0000 UTC m=+1197.808875717" observedRunningTime="2026-02-04 09:01:28.256448915 +0000 UTC m=+1198.296506680" watchObservedRunningTime="2026-02-04 09:01:28.258311865 +0000 UTC m=+1198.298369640"
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.272144 4644 scope.go:117] "RemoveContainer" containerID="3e2b894654072ee56f417e213a689edecdf956aff0fec27df233f9b700e455b1"
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.282929 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lffcr"]
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.293736 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lffcr"]
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.323005 4644 scope.go:117] "RemoveContainer" containerID="7e0e57a9a6a893193c143cdd7320e95a786469fa80310b6f4805b0006f7e2c02"
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.671233 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" path="/var/lib/kubelet/pods/1c536da4-4882-4716-b4ae-4894dcc769e1/volumes"
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.676029 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 04 09:01:28 crc kubenswrapper[4644]: I0204 09:01:28.676121 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 04 09:01:29 crc kubenswrapper[4644]: I0204 09:01:29.223051 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2831415-d88f-4c19-a272-4898fa142bd0" containerID="27a747ec948e5e971fb5e60065603d5cfd080c275a543d09ee5fb11f00fdff9c" exitCode=2
Feb 04 09:01:29 crc kubenswrapper[4644]: I0204 09:01:29.223299 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2831415-d88f-4c19-a272-4898fa142bd0" containerID="a2dfe072daa067a30f643a86fe89f6eb328cfc0307a5f14750f0cbefe8e6b812" exitCode=0
Feb 04 09:01:29 crc kubenswrapper[4644]: I0204 09:01:29.223308 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2831415-d88f-4c19-a272-4898fa142bd0" containerID="18af029268efa27f781e3f91f3cc5729c59b0c00ed1000c22623524efb85f970" exitCode=0
Feb 04 09:01:29 crc kubenswrapper[4644]: I0204 09:01:29.224037 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2831415-d88f-4c19-a272-4898fa142bd0","Type":"ContainerDied","Data":"27a747ec948e5e971fb5e60065603d5cfd080c275a543d09ee5fb11f00fdff9c"}
Feb 04 09:01:29 crc kubenswrapper[4644]: I0204 09:01:29.224059 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2831415-d88f-4c19-a272-4898fa142bd0","Type":"ContainerDied","Data":"a2dfe072daa067a30f643a86fe89f6eb328cfc0307a5f14750f0cbefe8e6b812"}
Feb 04 09:01:29 crc kubenswrapper[4644]: I0204 09:01:29.224072 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2831415-d88f-4c19-a272-4898fa142bd0","Type":"ContainerDied","Data":"18af029268efa27f781e3f91f3cc5729c59b0c00ed1000c22623524efb85f970"}
Feb 04 09:01:30 crc kubenswrapper[4644]: I0204 09:01:30.368553 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="4109caeb-65a7-4c6b-b09c-83da593a1ef2" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.175:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 04 09:01:30 crc kubenswrapper[4644]: I0204 09:01:30.376367 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 04 09:01:33 crc kubenswrapper[4644]: I0204 09:01:33.989413 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-zffrn"]
Feb 04 09:01:33 crc kubenswrapper[4644]: E0204 09:01:33.990244 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server"
Feb 04 09:01:33 crc kubenswrapper[4644]: I0204 09:01:33.990258 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server"
Feb 04 09:01:33 crc kubenswrapper[4644]: E0204 09:01:33.990274 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="extract-content"
Feb 04 09:01:33 crc kubenswrapper[4644]: I0204 09:01:33.990282 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="extract-content"
Feb 04 09:01:33 crc kubenswrapper[4644]: E0204 09:01:33.990291 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server"
Feb 04 09:01:33 crc kubenswrapper[4644]: I0204 09:01:33.990298 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server"
Feb 04 09:01:33 crc kubenswrapper[4644]: E0204 09:01:33.990313 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="extract-utilities"
Feb 04 09:01:33 crc kubenswrapper[4644]: I0204 09:01:33.990319 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="extract-utilities"
Feb 04 09:01:33 crc kubenswrapper[4644]: I0204 09:01:33.990503 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server"
Feb 04 09:01:33 crc kubenswrapper[4644]: I0204 09:01:33.990514 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c536da4-4882-4716-b4ae-4894dcc769e1" containerName="registry-server"
Need to start a new one" pod="openstack/nova-api-db-create-zffrn" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.017293 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zffrn"] Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.053453 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1296c955-bd6d-4611-8bfa-abe4658610e1-operator-scripts\") pod \"nova-api-db-create-zffrn\" (UID: \"1296c955-bd6d-4611-8bfa-abe4658610e1\") " pod="openstack/nova-api-db-create-zffrn" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.053518 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl69s\" (UniqueName: \"kubernetes.io/projected/1296c955-bd6d-4611-8bfa-abe4658610e1-kube-api-access-wl69s\") pod \"nova-api-db-create-zffrn\" (UID: \"1296c955-bd6d-4611-8bfa-abe4658610e1\") " pod="openstack/nova-api-db-create-zffrn" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.155482 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1296c955-bd6d-4611-8bfa-abe4658610e1-operator-scripts\") pod \"nova-api-db-create-zffrn\" (UID: \"1296c955-bd6d-4611-8bfa-abe4658610e1\") " pod="openstack/nova-api-db-create-zffrn" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.155551 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl69s\" (UniqueName: \"kubernetes.io/projected/1296c955-bd6d-4611-8bfa-abe4658610e1-kube-api-access-wl69s\") pod \"nova-api-db-create-zffrn\" (UID: \"1296c955-bd6d-4611-8bfa-abe4658610e1\") " pod="openstack/nova-api-db-create-zffrn" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.156494 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1296c955-bd6d-4611-8bfa-abe4658610e1-operator-scripts\") pod \"nova-api-db-create-zffrn\" (UID: \"1296c955-bd6d-4611-8bfa-abe4658610e1\") " pod="openstack/nova-api-db-create-zffrn" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.165452 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-hfqtm"] Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.166819 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hfqtm" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.189267 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-09da-account-create-update-x9b8b"] Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.200494 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-09da-account-create-update-x9b8b" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.205471 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.233674 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl69s\" (UniqueName: \"kubernetes.io/projected/1296c955-bd6d-4611-8bfa-abe4658610e1-kube-api-access-wl69s\") pod \"nova-api-db-create-zffrn\" (UID: \"1296c955-bd6d-4611-8bfa-abe4658610e1\") " pod="openstack/nova-api-db-create-zffrn" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.256640 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9xjp\" (UniqueName: \"kubernetes.io/projected/80419f02-9077-41fe-94d7-eed2c3dbdd46-kube-api-access-m9xjp\") pod \"nova-api-09da-account-create-update-x9b8b\" (UID: \"80419f02-9077-41fe-94d7-eed2c3dbdd46\") " pod="openstack/nova-api-09da-account-create-update-x9b8b" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.256899 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70186d10-7adc-40ec-b71b-0d9cd786d034-operator-scripts\") pod \"nova-cell0-db-create-hfqtm\" (UID: \"70186d10-7adc-40ec-b71b-0d9cd786d034\") " pod="openstack/nova-cell0-db-create-hfqtm" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.256998 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nrwq\" (UniqueName: \"kubernetes.io/projected/70186d10-7adc-40ec-b71b-0d9cd786d034-kube-api-access-9nrwq\") pod \"nova-cell0-db-create-hfqtm\" (UID: \"70186d10-7adc-40ec-b71b-0d9cd786d034\") " pod="openstack/nova-cell0-db-create-hfqtm" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.257110 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80419f02-9077-41fe-94d7-eed2c3dbdd46-operator-scripts\") pod \"nova-api-09da-account-create-update-x9b8b\" (UID: \"80419f02-9077-41fe-94d7-eed2c3dbdd46\") " pod="openstack/nova-api-09da-account-create-update-x9b8b" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.260583 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-09da-account-create-update-x9b8b"] Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.300289 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hfqtm"] Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.344733 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-zffrn" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.359492 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9xjp\" (UniqueName: \"kubernetes.io/projected/80419f02-9077-41fe-94d7-eed2c3dbdd46-kube-api-access-m9xjp\") pod \"nova-api-09da-account-create-update-x9b8b\" (UID: \"80419f02-9077-41fe-94d7-eed2c3dbdd46\") " pod="openstack/nova-api-09da-account-create-update-x9b8b" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.359554 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70186d10-7adc-40ec-b71b-0d9cd786d034-operator-scripts\") pod \"nova-cell0-db-create-hfqtm\" (UID: \"70186d10-7adc-40ec-b71b-0d9cd786d034\") " pod="openstack/nova-cell0-db-create-hfqtm" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.359617 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nrwq\" (UniqueName: \"kubernetes.io/projected/70186d10-7adc-40ec-b71b-0d9cd786d034-kube-api-access-9nrwq\") pod \"nova-cell0-db-create-hfqtm\" (UID: \"70186d10-7adc-40ec-b71b-0d9cd786d034\") " pod="openstack/nova-cell0-db-create-hfqtm" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.359678 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80419f02-9077-41fe-94d7-eed2c3dbdd46-operator-scripts\") pod \"nova-api-09da-account-create-update-x9b8b\" (UID: \"80419f02-9077-41fe-94d7-eed2c3dbdd46\") " pod="openstack/nova-api-09da-account-create-update-x9b8b" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.360610 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80419f02-9077-41fe-94d7-eed2c3dbdd46-operator-scripts\") pod \"nova-api-09da-account-create-update-x9b8b\" (UID: \"80419f02-9077-41fe-94d7-eed2c3dbdd46\") " pod="openstack/nova-api-09da-account-create-update-x9b8b" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.361047 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70186d10-7adc-40ec-b71b-0d9cd786d034-operator-scripts\") pod \"nova-cell0-db-create-hfqtm\" (UID: \"70186d10-7adc-40ec-b71b-0d9cd786d034\") " pod="openstack/nova-cell0-db-create-hfqtm" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.387751 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vs7nh"] Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.389542 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vs7nh" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.394716 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9xjp\" (UniqueName: \"kubernetes.io/projected/80419f02-9077-41fe-94d7-eed2c3dbdd46-kube-api-access-m9xjp\") pod \"nova-api-09da-account-create-update-x9b8b\" (UID: \"80419f02-9077-41fe-94d7-eed2c3dbdd46\") " pod="openstack/nova-api-09da-account-create-update-x9b8b" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.397795 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nrwq\" (UniqueName: \"kubernetes.io/projected/70186d10-7adc-40ec-b71b-0d9cd786d034-kube-api-access-9nrwq\") pod \"nova-cell0-db-create-hfqtm\" (UID: \"70186d10-7adc-40ec-b71b-0d9cd786d034\") " pod="openstack/nova-cell0-db-create-hfqtm" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.437638 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vs7nh"] Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.461204 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e675941-cfe4-450e-87e8-8f0e0e68d7de-operator-scripts\") pod \"nova-cell1-db-create-vs7nh\" (UID: \"2e675941-cfe4-450e-87e8-8f0e0e68d7de\") " pod="openstack/nova-cell1-db-create-vs7nh" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.461265 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x6fw\" (UniqueName: \"kubernetes.io/projected/2e675941-cfe4-450e-87e8-8f0e0e68d7de-kube-api-access-7x6fw\") pod \"nova-cell1-db-create-vs7nh\" (UID: \"2e675941-cfe4-450e-87e8-8f0e0e68d7de\") " pod="openstack/nova-cell1-db-create-vs7nh" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.483657 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hfqtm" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.486354 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-f4e7-account-create-update-s9zj2"] Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.494439 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-f4e7-account-create-update-s9zj2" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.498893 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.513599 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f4e7-account-create-update-s9zj2"] Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.564012 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e675941-cfe4-450e-87e8-8f0e0e68d7de-operator-scripts\") pod \"nova-cell1-db-create-vs7nh\" (UID: \"2e675941-cfe4-450e-87e8-8f0e0e68d7de\") " pod="openstack/nova-cell1-db-create-vs7nh" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.564334 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x6fw\" (UniqueName: \"kubernetes.io/projected/2e675941-cfe4-450e-87e8-8f0e0e68d7de-kube-api-access-7x6fw\") pod \"nova-cell1-db-create-vs7nh\" (UID: \"2e675941-cfe4-450e-87e8-8f0e0e68d7de\") " pod="openstack/nova-cell1-db-create-vs7nh" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.564438 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjmhr\" (UniqueName: \"kubernetes.io/projected/5ef1c0ca-ffc0-495d-a4d1-da74c34137fe-kube-api-access-rjmhr\") pod \"nova-cell0-f4e7-account-create-update-s9zj2\" (UID: \"5ef1c0ca-ffc0-495d-a4d1-da74c34137fe\") " pod="openstack/nova-cell0-f4e7-account-create-update-s9zj2" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.564461 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef1c0ca-ffc0-495d-a4d1-da74c34137fe-operator-scripts\") pod \"nova-cell0-f4e7-account-create-update-s9zj2\" (UID: \"5ef1c0ca-ffc0-495d-a4d1-da74c34137fe\") " pod="openstack/nova-cell0-f4e7-account-create-update-s9zj2" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.565173 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e675941-cfe4-450e-87e8-8f0e0e68d7de-operator-scripts\") pod \"nova-cell1-db-create-vs7nh\" (UID: \"2e675941-cfe4-450e-87e8-8f0e0e68d7de\") " pod="openstack/nova-cell1-db-create-vs7nh" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.603481 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-09da-account-create-update-x9b8b" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.626475 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-0b44-account-create-update-td6dv"] Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.627790 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0b44-account-create-update-td6dv"] Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.627946 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0b44-account-create-update-td6dv" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.629530 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x6fw\" (UniqueName: \"kubernetes.io/projected/2e675941-cfe4-450e-87e8-8f0e0e68d7de-kube-api-access-7x6fw\") pod \"nova-cell1-db-create-vs7nh\" (UID: \"2e675941-cfe4-450e-87e8-8f0e0e68d7de\") " pod="openstack/nova-cell1-db-create-vs7nh" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.630228 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.687612 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf5tm\" (UniqueName: \"kubernetes.io/projected/7dd0b3ba-0734-4358-886a-28dfc62ff494-kube-api-access-jf5tm\") pod \"nova-cell1-0b44-account-create-update-td6dv\" (UID: \"7dd0b3ba-0734-4358-886a-28dfc62ff494\") " pod="openstack/nova-cell1-0b44-account-create-update-td6dv" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.687807 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd0b3ba-0734-4358-886a-28dfc62ff494-operator-scripts\") pod \"nova-cell1-0b44-account-create-update-td6dv\" (UID: \"7dd0b3ba-0734-4358-886a-28dfc62ff494\") " pod="openstack/nova-cell1-0b44-account-create-update-td6dv" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.688954 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjmhr\" (UniqueName: \"kubernetes.io/projected/5ef1c0ca-ffc0-495d-a4d1-da74c34137fe-kube-api-access-rjmhr\") pod \"nova-cell0-f4e7-account-create-update-s9zj2\" (UID: \"5ef1c0ca-ffc0-495d-a4d1-da74c34137fe\") " pod="openstack/nova-cell0-f4e7-account-create-update-s9zj2" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.689007 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef1c0ca-ffc0-495d-a4d1-da74c34137fe-operator-scripts\") pod \"nova-cell0-f4e7-account-create-update-s9zj2\" (UID: \"5ef1c0ca-ffc0-495d-a4d1-da74c34137fe\") " pod="openstack/nova-cell0-f4e7-account-create-update-s9zj2" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.714057 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef1c0ca-ffc0-495d-a4d1-da74c34137fe-operator-scripts\") pod \"nova-cell0-f4e7-account-create-update-s9zj2\" (UID: \"5ef1c0ca-ffc0-495d-a4d1-da74c34137fe\") " pod="openstack/nova-cell0-f4e7-account-create-update-s9zj2" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.726800 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjmhr\" (UniqueName: \"kubernetes.io/projected/5ef1c0ca-ffc0-495d-a4d1-da74c34137fe-kube-api-access-rjmhr\") pod \"nova-cell0-f4e7-account-create-update-s9zj2\" (UID: \"5ef1c0ca-ffc0-495d-a4d1-da74c34137fe\") " pod="openstack/nova-cell0-f4e7-account-create-update-s9zj2" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.791543 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf5tm\" (UniqueName: \"kubernetes.io/projected/7dd0b3ba-0734-4358-886a-28dfc62ff494-kube-api-access-jf5tm\") pod \"nova-cell1-0b44-account-create-update-td6dv\" 
(UID: \"7dd0b3ba-0734-4358-886a-28dfc62ff494\") " pod="openstack/nova-cell1-0b44-account-create-update-td6dv" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.791592 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd0b3ba-0734-4358-886a-28dfc62ff494-operator-scripts\") pod \"nova-cell1-0b44-account-create-update-td6dv\" (UID: \"7dd0b3ba-0734-4358-886a-28dfc62ff494\") " pod="openstack/nova-cell1-0b44-account-create-update-td6dv" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.792277 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd0b3ba-0734-4358-886a-28dfc62ff494-operator-scripts\") pod \"nova-cell1-0b44-account-create-update-td6dv\" (UID: \"7dd0b3ba-0734-4358-886a-28dfc62ff494\") " pod="openstack/nova-cell1-0b44-account-create-update-td6dv" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.805350 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vs7nh" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.808800 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf5tm\" (UniqueName: \"kubernetes.io/projected/7dd0b3ba-0734-4358-886a-28dfc62ff494-kube-api-access-jf5tm\") pod \"nova-cell1-0b44-account-create-update-td6dv\" (UID: \"7dd0b3ba-0734-4358-886a-28dfc62ff494\") " pod="openstack/nova-cell1-0b44-account-create-update-td6dv" Feb 04 09:01:34 crc kubenswrapper[4644]: I0204 09:01:34.815004 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f4e7-account-create-update-s9zj2" Feb 04 09:01:35 crc kubenswrapper[4644]: I0204 09:01:35.002915 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zffrn"] Feb 04 09:01:35 crc kubenswrapper[4644]: I0204 09:01:35.019654 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0b44-account-create-update-td6dv" Feb 04 09:01:35 crc kubenswrapper[4644]: I0204 09:01:35.181254 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hfqtm"] Feb 04 09:01:35 crc kubenswrapper[4644]: I0204 09:01:35.276128 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hfqtm" event={"ID":"70186d10-7adc-40ec-b71b-0d9cd786d034","Type":"ContainerStarted","Data":"4304eecaa84f433991927fad18a739cd3e8c486706f0c8df95e58ee96d960142"} Feb 04 09:01:35 crc kubenswrapper[4644]: I0204 09:01:35.285521 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zffrn" event={"ID":"1296c955-bd6d-4611-8bfa-abe4658610e1","Type":"ContainerStarted","Data":"cb9a0ae263dd39db0fe9f6658e9b9f527a49c50318fa5e601f0229b89da8c111"} Feb 04 09:01:35 crc kubenswrapper[4644]: I0204 09:01:35.371698 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-09da-account-create-update-x9b8b"] Feb 04 09:01:35 crc kubenswrapper[4644]: I0204 09:01:35.400725 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vs7nh"] Feb 04 09:01:35 crc kubenswrapper[4644]: I0204 09:01:35.555858 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:01:35 crc kubenswrapper[4644]: I0204 09:01:35.556073 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:01:35 crc kubenswrapper[4644]: I0204 09:01:35.556186 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 09:01:35 crc kubenswrapper[4644]: I0204 09:01:35.557245 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"36f72411266c61400b63aa036f1c2b9650e9b73d1bad4f669e237a3c8534406d"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 09:01:35 crc kubenswrapper[4644]: I0204 09:01:35.557400 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://36f72411266c61400b63aa036f1c2b9650e9b73d1bad4f669e237a3c8534406d" gracePeriod=600 Feb 04 09:01:35 crc kubenswrapper[4644]: I0204 09:01:35.564580 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f4e7-account-create-update-s9zj2"] Feb 04 09:01:35 crc kubenswrapper[4644]: I0204 09:01:35.840644 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0b44-account-create-update-td6dv"] Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.294269 4644 generic.go:334] "Generic (PLEG): container finished" podID="1296c955-bd6d-4611-8bfa-abe4658610e1" 
containerID="70e25a297dd5c073fec613b3aea2a3be5fdd619956b25e4b9ebf2375a7bcab58" exitCode=0 Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.294370 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zffrn" event={"ID":"1296c955-bd6d-4611-8bfa-abe4658610e1","Type":"ContainerDied","Data":"70e25a297dd5c073fec613b3aea2a3be5fdd619956b25e4b9ebf2375a7bcab58"} Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.296742 4644 generic.go:334] "Generic (PLEG): container finished" podID="2e675941-cfe4-450e-87e8-8f0e0e68d7de" containerID="64da4c7367a472bfa748cfe1f4cf4aa91436d4de9cbc3eb92fd8c78f64fb3efd" exitCode=0 Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.296876 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vs7nh" event={"ID":"2e675941-cfe4-450e-87e8-8f0e0e68d7de","Type":"ContainerDied","Data":"64da4c7367a472bfa748cfe1f4cf4aa91436d4de9cbc3eb92fd8c78f64fb3efd"} Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.296899 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vs7nh" event={"ID":"2e675941-cfe4-450e-87e8-8f0e0e68d7de","Type":"ContainerStarted","Data":"d99f7181c8ab6190bc4e93d0ee6896adbf7b8f5f9a11f9b60174356f50074499"} Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.298354 4644 generic.go:334] "Generic (PLEG): container finished" podID="80419f02-9077-41fe-94d7-eed2c3dbdd46" containerID="81b784ceb4237fd881f757f52eaea09b38df2fcde2ac49f50355cf7346e4f327" exitCode=0 Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.298406 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-09da-account-create-update-x9b8b" event={"ID":"80419f02-9077-41fe-94d7-eed2c3dbdd46","Type":"ContainerDied","Data":"81b784ceb4237fd881f757f52eaea09b38df2fcde2ac49f50355cf7346e4f327"} Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.298420 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-09da-account-create-update-x9b8b" event={"ID":"80419f02-9077-41fe-94d7-eed2c3dbdd46","Type":"ContainerStarted","Data":"7a88b51318ade4b2b35c2c84631a414f4c70776989d5f9f1efe9901f2a724550"} Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.299553 4644 generic.go:334] "Generic (PLEG): container finished" podID="70186d10-7adc-40ec-b71b-0d9cd786d034" containerID="be1faf799218fa365f6123a04a1ac22818c8ce75f4cea3b2eb0d058f20d1ba68" exitCode=0 Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.299593 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hfqtm" event={"ID":"70186d10-7adc-40ec-b71b-0d9cd786d034","Type":"ContainerDied","Data":"be1faf799218fa365f6123a04a1ac22818c8ce75f4cea3b2eb0d058f20d1ba68"} Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.302233 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="36f72411266c61400b63aa036f1c2b9650e9b73d1bad4f669e237a3c8534406d" exitCode=0 Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.302275 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"36f72411266c61400b63aa036f1c2b9650e9b73d1bad4f669e237a3c8534406d"} Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.302292 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" 
event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"e915b7a995ee5263275a39a64bfa25a45000de9a4285b8f5bfe66a5bbbce8ddf"} Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.302308 4644 scope.go:117] "RemoveContainer" containerID="d37b7ec44c6b923e084d94c0277cc27b0523c1422f5853a55c2775dc5aaf2703" Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.306895 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f4e7-account-create-update-s9zj2" event={"ID":"5ef1c0ca-ffc0-495d-a4d1-da74c34137fe","Type":"ContainerStarted","Data":"90fea62efc85d5be2c74d55bfff5182e77dc9b19ba41117616a092602f115056"} Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.306954 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f4e7-account-create-update-s9zj2" event={"ID":"5ef1c0ca-ffc0-495d-a4d1-da74c34137fe","Type":"ContainerStarted","Data":"0a213c21f55f2ded15adc83e695a8d53dbb9ba01099c015be07c0e1f0a640ca1"} Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.308791 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0b44-account-create-update-td6dv" event={"ID":"7dd0b3ba-0734-4358-886a-28dfc62ff494","Type":"ContainerStarted","Data":"6c16233525e625ecab3d118541453b1d23af7296b0fb53135f67b6771916e66b"} Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.308935 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0b44-account-create-update-td6dv" event={"ID":"7dd0b3ba-0734-4358-886a-28dfc62ff494","Type":"ContainerStarted","Data":"4d5abe8173fcc3491cd3bfe3f5413b7210623851026d1de7977a5471802e867b"} Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.376068 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-0b44-account-create-update-td6dv" podStartSLOduration=2.376053248 podStartE2EDuration="2.376053248s" podCreationTimestamp="2026-02-04 09:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:01:36.372528594 +0000 UTC m=+1206.412586349" watchObservedRunningTime="2026-02-04 09:01:36.376053248 +0000 UTC m=+1206.416111003" Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.423947 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-f4e7-account-create-update-s9zj2" podStartSLOduration=2.4239275510000002 podStartE2EDuration="2.423927551s" podCreationTimestamp="2026-02-04 09:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:01:36.41446244 +0000 UTC m=+1206.454520195" watchObservedRunningTime="2026-02-04 09:01:36.423927551 +0000 UTC m=+1206.463985306" Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.614895 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fb9db66f6-v84nx" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 04 09:01:36 crc kubenswrapper[4644]: I0204 09:01:36.674992 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-658bfcb544-88gj4" podUID="676db25f-e0ad-48cc-af2c-88029d6eb80d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial 
tcp 10.217.0.146:8443: connect: connection refused" Feb 04 09:01:37 crc kubenswrapper[4644]: I0204 09:01:37.320859 4644 generic.go:334] "Generic (PLEG): container finished" podID="5ef1c0ca-ffc0-495d-a4d1-da74c34137fe" containerID="90fea62efc85d5be2c74d55bfff5182e77dc9b19ba41117616a092602f115056" exitCode=0 Feb 04 09:01:37 crc kubenswrapper[4644]: I0204 09:01:37.320894 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f4e7-account-create-update-s9zj2" event={"ID":"5ef1c0ca-ffc0-495d-a4d1-da74c34137fe","Type":"ContainerDied","Data":"90fea62efc85d5be2c74d55bfff5182e77dc9b19ba41117616a092602f115056"} Feb 04 09:01:37 crc kubenswrapper[4644]: I0204 09:01:37.322950 4644 generic.go:334] "Generic (PLEG): container finished" podID="7dd0b3ba-0734-4358-886a-28dfc62ff494" containerID="6c16233525e625ecab3d118541453b1d23af7296b0fb53135f67b6771916e66b" exitCode=0 Feb 04 09:01:37 crc kubenswrapper[4644]: I0204 09:01:37.322985 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0b44-account-create-update-td6dv" event={"ID":"7dd0b3ba-0734-4358-886a-28dfc62ff494","Type":"ContainerDied","Data":"6c16233525e625ecab3d118541453b1d23af7296b0fb53135f67b6771916e66b"} Feb 04 09:01:37 crc kubenswrapper[4644]: I0204 09:01:37.832471 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-09da-account-create-update-x9b8b" Feb 04 09:01:37 crc kubenswrapper[4644]: I0204 09:01:37.899057 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9xjp\" (UniqueName: \"kubernetes.io/projected/80419f02-9077-41fe-94d7-eed2c3dbdd46-kube-api-access-m9xjp\") pod \"80419f02-9077-41fe-94d7-eed2c3dbdd46\" (UID: \"80419f02-9077-41fe-94d7-eed2c3dbdd46\") " Feb 04 09:01:37 crc kubenswrapper[4644]: I0204 09:01:37.899205 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80419f02-9077-41fe-94d7-eed2c3dbdd46-operator-scripts\") pod \"80419f02-9077-41fe-94d7-eed2c3dbdd46\" (UID: \"80419f02-9077-41fe-94d7-eed2c3dbdd46\") " Feb 04 09:01:37 crc kubenswrapper[4644]: I0204 09:01:37.900562 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80419f02-9077-41fe-94d7-eed2c3dbdd46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80419f02-9077-41fe-94d7-eed2c3dbdd46" (UID: "80419f02-9077-41fe-94d7-eed2c3dbdd46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:01:37 crc kubenswrapper[4644]: I0204 09:01:37.909898 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80419f02-9077-41fe-94d7-eed2c3dbdd46-kube-api-access-m9xjp" (OuterVolumeSpecName: "kube-api-access-m9xjp") pod "80419f02-9077-41fe-94d7-eed2c3dbdd46" (UID: "80419f02-9077-41fe-94d7-eed2c3dbdd46"). InnerVolumeSpecName "kube-api-access-m9xjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.001421 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80419f02-9077-41fe-94d7-eed2c3dbdd46-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.001452 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9xjp\" (UniqueName: \"kubernetes.io/projected/80419f02-9077-41fe-94d7-eed2c3dbdd46-kube-api-access-m9xjp\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.055696 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zffrn" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.062004 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vs7nh" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.067691 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hfqtm" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.204140 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70186d10-7adc-40ec-b71b-0d9cd786d034-operator-scripts\") pod \"70186d10-7adc-40ec-b71b-0d9cd786d034\" (UID: \"70186d10-7adc-40ec-b71b-0d9cd786d034\") " Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.204458 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e675941-cfe4-450e-87e8-8f0e0e68d7de-operator-scripts\") pod \"2e675941-cfe4-450e-87e8-8f0e0e68d7de\" (UID: \"2e675941-cfe4-450e-87e8-8f0e0e68d7de\") " Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.204520 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1296c955-bd6d-4611-8bfa-abe4658610e1-operator-scripts\") pod \"1296c955-bd6d-4611-8bfa-abe4658610e1\" (UID: \"1296c955-bd6d-4611-8bfa-abe4658610e1\") " Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.204561 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl69s\" (UniqueName: \"kubernetes.io/projected/1296c955-bd6d-4611-8bfa-abe4658610e1-kube-api-access-wl69s\") pod \"1296c955-bd6d-4611-8bfa-abe4658610e1\" (UID: \"1296c955-bd6d-4611-8bfa-abe4658610e1\") " Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.204663 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x6fw\" (UniqueName: \"kubernetes.io/projected/2e675941-cfe4-450e-87e8-8f0e0e68d7de-kube-api-access-7x6fw\") pod \"2e675941-cfe4-450e-87e8-8f0e0e68d7de\" (UID: \"2e675941-cfe4-450e-87e8-8f0e0e68d7de\") " Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.204694 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nrwq\" (UniqueName: \"kubernetes.io/projected/70186d10-7adc-40ec-b71b-0d9cd786d034-kube-api-access-9nrwq\") pod \"70186d10-7adc-40ec-b71b-0d9cd786d034\" (UID: \"70186d10-7adc-40ec-b71b-0d9cd786d034\") " Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.205195 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1296c955-bd6d-4611-8bfa-abe4658610e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1296c955-bd6d-4611-8bfa-abe4658610e1" (UID: "1296c955-bd6d-4611-8bfa-abe4658610e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.205202 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70186d10-7adc-40ec-b71b-0d9cd786d034-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70186d10-7adc-40ec-b71b-0d9cd786d034" (UID: "70186d10-7adc-40ec-b71b-0d9cd786d034"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.205468 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e675941-cfe4-450e-87e8-8f0e0e68d7de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e675941-cfe4-450e-87e8-8f0e0e68d7de" (UID: "2e675941-cfe4-450e-87e8-8f0e0e68d7de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.205866 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70186d10-7adc-40ec-b71b-0d9cd786d034-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.205892 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e675941-cfe4-450e-87e8-8f0e0e68d7de-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.205903 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1296c955-bd6d-4611-8bfa-abe4658610e1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.209156 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e675941-cfe4-450e-87e8-8f0e0e68d7de-kube-api-access-7x6fw" (OuterVolumeSpecName: "kube-api-access-7x6fw") pod "2e675941-cfe4-450e-87e8-8f0e0e68d7de" (UID: "2e675941-cfe4-450e-87e8-8f0e0e68d7de"). InnerVolumeSpecName "kube-api-access-7x6fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.209994 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1296c955-bd6d-4611-8bfa-abe4658610e1-kube-api-access-wl69s" (OuterVolumeSpecName: "kube-api-access-wl69s") pod "1296c955-bd6d-4611-8bfa-abe4658610e1" (UID: "1296c955-bd6d-4611-8bfa-abe4658610e1"). InnerVolumeSpecName "kube-api-access-wl69s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.210381 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70186d10-7adc-40ec-b71b-0d9cd786d034-kube-api-access-9nrwq" (OuterVolumeSpecName: "kube-api-access-9nrwq") pod "70186d10-7adc-40ec-b71b-0d9cd786d034" (UID: "70186d10-7adc-40ec-b71b-0d9cd786d034"). InnerVolumeSpecName "kube-api-access-9nrwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.307774 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl69s\" (UniqueName: \"kubernetes.io/projected/1296c955-bd6d-4611-8bfa-abe4658610e1-kube-api-access-wl69s\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.307999 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x6fw\" (UniqueName: \"kubernetes.io/projected/2e675941-cfe4-450e-87e8-8f0e0e68d7de-kube-api-access-7x6fw\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.308080 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nrwq\" (UniqueName: \"kubernetes.io/projected/70186d10-7adc-40ec-b71b-0d9cd786d034-kube-api-access-9nrwq\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.331116 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zffrn" event={"ID":"1296c955-bd6d-4611-8bfa-abe4658610e1","Type":"ContainerDied","Data":"cb9a0ae263dd39db0fe9f6658e9b9f527a49c50318fa5e601f0229b89da8c111"} Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.332551 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb9a0ae263dd39db0fe9f6658e9b9f527a49c50318fa5e601f0229b89da8c111" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.331139 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zffrn" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.332761 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vs7nh" event={"ID":"2e675941-cfe4-450e-87e8-8f0e0e68d7de","Type":"ContainerDied","Data":"d99f7181c8ab6190bc4e93d0ee6896adbf7b8f5f9a11f9b60174356f50074499"} Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.332786 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d99f7181c8ab6190bc4e93d0ee6896adbf7b8f5f9a11f9b60174356f50074499" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.332886 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vs7nh" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.333997 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-09da-account-create-update-x9b8b" event={"ID":"80419f02-9077-41fe-94d7-eed2c3dbdd46","Type":"ContainerDied","Data":"7a88b51318ade4b2b35c2c84631a414f4c70776989d5f9f1efe9901f2a724550"} Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.334019 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a88b51318ade4b2b35c2c84631a414f4c70776989d5f9f1efe9901f2a724550" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.334048 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-09da-account-create-update-x9b8b" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.336024 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-hfqtm" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.342526 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hfqtm" event={"ID":"70186d10-7adc-40ec-b71b-0d9cd786d034","Type":"ContainerDied","Data":"4304eecaa84f433991927fad18a739cd3e8c486706f0c8df95e58ee96d960142"} Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.342549 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4304eecaa84f433991927fad18a739cd3e8c486706f0c8df95e58ee96d960142" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.644644 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f4e7-account-create-update-s9zj2" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.723836 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjmhr\" (UniqueName: \"kubernetes.io/projected/5ef1c0ca-ffc0-495d-a4d1-da74c34137fe-kube-api-access-rjmhr\") pod \"5ef1c0ca-ffc0-495d-a4d1-da74c34137fe\" (UID: \"5ef1c0ca-ffc0-495d-a4d1-da74c34137fe\") " Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.723918 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef1c0ca-ffc0-495d-a4d1-da74c34137fe-operator-scripts\") pod \"5ef1c0ca-ffc0-495d-a4d1-da74c34137fe\" (UID: \"5ef1c0ca-ffc0-495d-a4d1-da74c34137fe\") " Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.725445 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ef1c0ca-ffc0-495d-a4d1-da74c34137fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ef1c0ca-ffc0-495d-a4d1-da74c34137fe" (UID: "5ef1c0ca-ffc0-495d-a4d1-da74c34137fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.762484 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ef1c0ca-ffc0-495d-a4d1-da74c34137fe-kube-api-access-rjmhr" (OuterVolumeSpecName: "kube-api-access-rjmhr") pod "5ef1c0ca-ffc0-495d-a4d1-da74c34137fe" (UID: "5ef1c0ca-ffc0-495d-a4d1-da74c34137fe"). InnerVolumeSpecName "kube-api-access-rjmhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.828462 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjmhr\" (UniqueName: \"kubernetes.io/projected/5ef1c0ca-ffc0-495d-a4d1-da74c34137fe-kube-api-access-rjmhr\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.829093 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef1c0ca-ffc0-495d-a4d1-da74c34137fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.833782 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0b44-account-create-update-td6dv" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.929968 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf5tm\" (UniqueName: \"kubernetes.io/projected/7dd0b3ba-0734-4358-886a-28dfc62ff494-kube-api-access-jf5tm\") pod \"7dd0b3ba-0734-4358-886a-28dfc62ff494\" (UID: \"7dd0b3ba-0734-4358-886a-28dfc62ff494\") " Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.930007 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd0b3ba-0734-4358-886a-28dfc62ff494-operator-scripts\") pod \"7dd0b3ba-0734-4358-886a-28dfc62ff494\" (UID: \"7dd0b3ba-0734-4358-886a-28dfc62ff494\") " Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.930818 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dd0b3ba-0734-4358-886a-28dfc62ff494-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7dd0b3ba-0734-4358-886a-28dfc62ff494" (UID: "7dd0b3ba-0734-4358-886a-28dfc62ff494"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:01:38 crc kubenswrapper[4644]: I0204 09:01:38.935078 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd0b3ba-0734-4358-886a-28dfc62ff494-kube-api-access-jf5tm" (OuterVolumeSpecName: "kube-api-access-jf5tm") pod "7dd0b3ba-0734-4358-886a-28dfc62ff494" (UID: "7dd0b3ba-0734-4358-886a-28dfc62ff494"). InnerVolumeSpecName "kube-api-access-jf5tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:01:39 crc kubenswrapper[4644]: I0204 09:01:39.031949 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf5tm\" (UniqueName: \"kubernetes.io/projected/7dd0b3ba-0734-4358-886a-28dfc62ff494-kube-api-access-jf5tm\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:39 crc kubenswrapper[4644]: I0204 09:01:39.031993 4644 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd0b3ba-0734-4358-886a-28dfc62ff494-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:39 crc kubenswrapper[4644]: I0204 09:01:39.346355 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-f4e7-account-create-update-s9zj2" Feb 04 09:01:39 crc kubenswrapper[4644]: I0204 09:01:39.346602 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f4e7-account-create-update-s9zj2" event={"ID":"5ef1c0ca-ffc0-495d-a4d1-da74c34137fe","Type":"ContainerDied","Data":"0a213c21f55f2ded15adc83e695a8d53dbb9ba01099c015be07c0e1f0a640ca1"} Feb 04 09:01:39 crc kubenswrapper[4644]: I0204 09:01:39.346650 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a213c21f55f2ded15adc83e695a8d53dbb9ba01099c015be07c0e1f0a640ca1" Feb 04 09:01:39 crc kubenswrapper[4644]: I0204 09:01:39.348306 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0b44-account-create-update-td6dv" event={"ID":"7dd0b3ba-0734-4358-886a-28dfc62ff494","Type":"ContainerDied","Data":"4d5abe8173fcc3491cd3bfe3f5413b7210623851026d1de7977a5471802e867b"} Feb 04 09:01:39 crc kubenswrapper[4644]: I0204 09:01:39.348364 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d5abe8173fcc3491cd3bfe3f5413b7210623851026d1de7977a5471802e867b" Feb 04 09:01:39 crc kubenswrapper[4644]: I0204 09:01:39.348487 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0b44-account-create-update-td6dv" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.631371 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ng8rj"] Feb 04 09:01:44 crc kubenswrapper[4644]: E0204 09:01:44.631981 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e675941-cfe4-450e-87e8-8f0e0e68d7de" containerName="mariadb-database-create" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.631993 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e675941-cfe4-450e-87e8-8f0e0e68d7de" containerName="mariadb-database-create" Feb 04 09:01:44 crc kubenswrapper[4644]: E0204 09:01:44.632009 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80419f02-9077-41fe-94d7-eed2c3dbdd46" containerName="mariadb-account-create-update" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.632015 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="80419f02-9077-41fe-94d7-eed2c3dbdd46" containerName="mariadb-account-create-update" Feb 04 09:01:44 crc kubenswrapper[4644]: E0204 09:01:44.632031 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1296c955-bd6d-4611-8bfa-abe4658610e1" containerName="mariadb-database-create" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.632037 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="1296c955-bd6d-4611-8bfa-abe4658610e1" containerName="mariadb-database-create" Feb 04 09:01:44 crc kubenswrapper[4644]: E0204 09:01:44.632057 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70186d10-7adc-40ec-b71b-0d9cd786d034" containerName="mariadb-database-create" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.632063 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="70186d10-7adc-40ec-b71b-0d9cd786d034" containerName="mariadb-database-create" Feb 04 09:01:44 crc kubenswrapper[4644]: E0204 09:01:44.632075 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef1c0ca-ffc0-495d-a4d1-da74c34137fe" containerName="mariadb-account-create-update" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.632081 4644 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5ef1c0ca-ffc0-495d-a4d1-da74c34137fe" containerName="mariadb-account-create-update" Feb 04 09:01:44 crc kubenswrapper[4644]: E0204 09:01:44.632091 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd0b3ba-0734-4358-886a-28dfc62ff494" containerName="mariadb-account-create-update" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.632096 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd0b3ba-0734-4358-886a-28dfc62ff494" containerName="mariadb-account-create-update" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.632248 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="80419f02-9077-41fe-94d7-eed2c3dbdd46" containerName="mariadb-account-create-update" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.632263 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd0b3ba-0734-4358-886a-28dfc62ff494" containerName="mariadb-account-create-update" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.632283 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ef1c0ca-ffc0-495d-a4d1-da74c34137fe" containerName="mariadb-account-create-update" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.632294 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e675941-cfe4-450e-87e8-8f0e0e68d7de" containerName="mariadb-database-create" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.632304 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="70186d10-7adc-40ec-b71b-0d9cd786d034" containerName="mariadb-database-create" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.632310 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="1296c955-bd6d-4611-8bfa-abe4658610e1" containerName="mariadb-database-create" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.632948 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.638628 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7hc5s" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.641802 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.644286 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.647336 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ng8rj"] Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.739080 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lnf9\" (UniqueName: \"kubernetes.io/projected/a0880634-6912-4a8b-98b2-b18209a19896-kube-api-access-4lnf9\") pod \"nova-cell0-conductor-db-sync-ng8rj\" (UID: \"a0880634-6912-4a8b-98b2-b18209a19896\") " pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.739172 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-config-data\") pod \"nova-cell0-conductor-db-sync-ng8rj\" (UID: \"a0880634-6912-4a8b-98b2-b18209a19896\") " pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.739233 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-scripts\") pod \"nova-cell0-conductor-db-sync-ng8rj\" (UID: \"a0880634-6912-4a8b-98b2-b18209a19896\") " pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.739296 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ng8rj\" (UID: \"a0880634-6912-4a8b-98b2-b18209a19896\") " pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.841582 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lnf9\" (UniqueName: \"kubernetes.io/projected/a0880634-6912-4a8b-98b2-b18209a19896-kube-api-access-4lnf9\") pod \"nova-cell0-conductor-db-sync-ng8rj\" (UID: \"a0880634-6912-4a8b-98b2-b18209a19896\") " pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.841625 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-config-data\") pod \"nova-cell0-conductor-db-sync-ng8rj\" (UID: \"a0880634-6912-4a8b-98b2-b18209a19896\") " pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.841670 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-scripts\") pod \"nova-cell0-conductor-db-sync-ng8rj\" (UID: 
\"a0880634-6912-4a8b-98b2-b18209a19896\") " pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.841699 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ng8rj\" (UID: \"a0880634-6912-4a8b-98b2-b18209a19896\") " pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.848105 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ng8rj\" (UID: \"a0880634-6912-4a8b-98b2-b18209a19896\") " pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.852166 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-scripts\") pod \"nova-cell0-conductor-db-sync-ng8rj\" (UID: \"a0880634-6912-4a8b-98b2-b18209a19896\") " pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.863361 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lnf9\" (UniqueName: \"kubernetes.io/projected/a0880634-6912-4a8b-98b2-b18209a19896-kube-api-access-4lnf9\") pod \"nova-cell0-conductor-db-sync-ng8rj\" (UID: \"a0880634-6912-4a8b-98b2-b18209a19896\") " pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.867385 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-config-data\") pod \"nova-cell0-conductor-db-sync-ng8rj\" (UID: \"a0880634-6912-4a8b-98b2-b18209a19896\") " pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:01:44 crc kubenswrapper[4644]: I0204 09:01:44.958869 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:01:45 crc kubenswrapper[4644]: I0204 09:01:45.433574 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ng8rj"] Feb 04 09:01:45 crc kubenswrapper[4644]: I0204 09:01:45.447148 4644 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 09:01:46 crc kubenswrapper[4644]: I0204 09:01:46.418835 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ng8rj" event={"ID":"a0880634-6912-4a8b-98b2-b18209a19896","Type":"ContainerStarted","Data":"61327a2888072ce0eafea7bbaf1209e89986f4e5a5c108fb2d6170b21964ce82"} Feb 04 09:01:50 crc kubenswrapper[4644]: I0204 09:01:50.556841 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 09:01:50 crc kubenswrapper[4644]: I0204 09:01:50.563356 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-658bfcb544-88gj4" Feb 04 09:01:52 crc kubenswrapper[4644]: I0204 09:01:52.597803 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 09:01:52 crc kubenswrapper[4644]: I0204 09:01:52.896168 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 04 09:01:52 crc kubenswrapper[4644]: I0204 09:01:52.969405 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-658bfcb544-88gj4" Feb 04 09:01:53 crc kubenswrapper[4644]: I0204 09:01:53.046442 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fb9db66f6-v84nx"] Feb 04 09:01:53 crc kubenswrapper[4644]: I0204 09:01:53.493787 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fb9db66f6-v84nx" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon-log" containerID="cri-o://1dab15a1be86303be92c3759c78a00e496aa99453479fc6bf25b4717598d3ad4" gracePeriod=30 Feb 04 09:01:53 crc kubenswrapper[4644]: I0204 09:01:53.493924 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fb9db66f6-v84nx" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" containerID="cri-o://81f16fdc203970319ecb23e7416371815a2c6e54217d6c427b1e56240753ad09" gracePeriod=30 Feb 04 09:01:56 crc kubenswrapper[4644]: I0204 09:01:56.638250 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fb9db66f6-v84nx" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:33396->10.217.0.145:8443: read: connection reset by peer" Feb 04 09:01:57 crc kubenswrapper[4644]: I0204 09:01:57.549761 4644 generic.go:334] "Generic (PLEG): container finished" podID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerID="81f16fdc203970319ecb23e7416371815a2c6e54217d6c427b1e56240753ad09" exitCode=0 Feb 04 09:01:57 crc kubenswrapper[4644]: I0204 09:01:57.549917 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb9db66f6-v84nx" 
event={"ID":"46ca97c0-c6d7-4547-bb97-1d8b032c6297","Type":"ContainerDied","Data":"81f16fdc203970319ecb23e7416371815a2c6e54217d6c427b1e56240753ad09"} Feb 04 09:01:57 crc kubenswrapper[4644]: I0204 09:01:57.550092 4644 scope.go:117] "RemoveContainer" containerID="8902692ba02e033725e2a06b7322db9e6b6ebfef6c1d54196d590bb4f96705ea" Feb 04 09:01:58 crc kubenswrapper[4644]: E0204 09:01:58.060614 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Feb 04 09:01:58 crc kubenswrapper[4644]: E0204 09:01:58.061041 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lnf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-ng8rj_openstack(a0880634-6912-4a8b-98b2-b18209a19896): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 09:01:58 crc kubenswrapper[4644]: E0204 09:01:58.062194 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-ng8rj" podUID="a0880634-6912-4a8b-98b2-b18209a19896" Feb 04 09:01:58 crc kubenswrapper[4644]: W0204 09:01:58.267814 4644 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ef1c0ca_ffc0_495d_a4d1_da74c34137fe.slice/crio-0a213c21f55f2ded15adc83e695a8d53dbb9ba01099c015be07c0e1f0a640ca1": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ef1c0ca_ffc0_495d_a4d1_da74c34137fe.slice/crio-0a213c21f55f2ded15adc83e695a8d53dbb9ba01099c015be07c0e1f0a640ca1: no such file or directory Feb 04 09:01:58 crc kubenswrapper[4644]: W0204 09:01:58.267898 4644 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ef1c0ca_ffc0_495d_a4d1_da74c34137fe.slice/crio-conmon-90fea62efc85d5be2c74d55bfff5182e77dc9b19ba41117616a092602f115056.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ef1c0ca_ffc0_495d_a4d1_da74c34137fe.slice/crio-conmon-90fea62efc85d5be2c74d55bfff5182e77dc9b19ba41117616a092602f115056.scope: no such file or directory Feb 04 09:01:58 crc kubenswrapper[4644]: W0204 09:01:58.267922 4644 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ef1c0ca_ffc0_495d_a4d1_da74c34137fe.slice/crio-90fea62efc85d5be2c74d55bfff5182e77dc9b19ba41117616a092602f115056.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ef1c0ca_ffc0_495d_a4d1_da74c34137fe.slice/crio-90fea62efc85d5be2c74d55bfff5182e77dc9b19ba41117616a092602f115056.scope: no such file or directory Feb 04 09:01:58 crc kubenswrapper[4644]: W0204 09:01:58.268037 4644 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dd0b3ba_0734_4358_886a_28dfc62ff494.slice/crio-4d5abe8173fcc3491cd3bfe3f5413b7210623851026d1de7977a5471802e867b": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dd0b3ba_0734_4358_886a_28dfc62ff494.slice/crio-4d5abe8173fcc3491cd3bfe3f5413b7210623851026d1de7977a5471802e867b: no such file or directory Feb 04 09:01:58 crc kubenswrapper[4644]: W0204 09:01:58.274774 4644 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dd0b3ba_0734_4358_886a_28dfc62ff494.slice/crio-conmon-6c16233525e625ecab3d118541453b1d23af7296b0fb53135f67b6771916e66b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dd0b3ba_0734_4358_886a_28dfc62ff494.slice/crio-conmon-6c16233525e625ecab3d118541453b1d23af7296b0fb53135f67b6771916e66b.scope: no such file or directory Feb 04 09:01:58 crc kubenswrapper[4644]: W0204 09:01:58.274899 4644 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dd0b3ba_0734_4358_886a_28dfc62ff494.slice/crio-6c16233525e625ecab3d118541453b1d23af7296b0fb53135f67b6771916e66b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dd0b3ba_0734_4358_886a_28dfc62ff494.slice/crio-6c16233525e625ecab3d118541453b1d23af7296b0fb53135f67b6771916e66b.scope: no such file or directory Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.561872 4644 generic.go:334] "Generic (PLEG): container 
finished" podID="c2831415-d88f-4c19-a272-4898fa142bd0" containerID="a86a224c6105fdf75148521d8aad7b3c7dd410b4673e5af950a65cab3e2385a6" exitCode=137 Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.562233 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2831415-d88f-4c19-a272-4898fa142bd0","Type":"ContainerDied","Data":"a86a224c6105fdf75148521d8aad7b3c7dd410b4673e5af950a65cab3e2385a6"} Feb 04 09:01:58 crc kubenswrapper[4644]: E0204 09:01:58.594799 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-ng8rj" podUID="a0880634-6912-4a8b-98b2-b18209a19896" Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.750212 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.854279 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2831415-d88f-4c19-a272-4898fa142bd0-log-httpd\") pod \"c2831415-d88f-4c19-a272-4898fa142bd0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.854352 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-sg-core-conf-yaml\") pod \"c2831415-d88f-4c19-a272-4898fa142bd0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.854412 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7whtf\" (UniqueName: \"kubernetes.io/projected/c2831415-d88f-4c19-a272-4898fa142bd0-kube-api-access-7whtf\") pod \"c2831415-d88f-4c19-a272-4898fa142bd0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.854431 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2831415-d88f-4c19-a272-4898fa142bd0-run-httpd\") pod \"c2831415-d88f-4c19-a272-4898fa142bd0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.854484 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-scripts\") pod \"c2831415-d88f-4c19-a272-4898fa142bd0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.854536 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-config-data\") pod \"c2831415-d88f-4c19-a272-4898fa142bd0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.854576 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-combined-ca-bundle\") pod \"c2831415-d88f-4c19-a272-4898fa142bd0\" (UID: \"c2831415-d88f-4c19-a272-4898fa142bd0\") " Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 
09:01:58.854806 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2831415-d88f-4c19-a272-4898fa142bd0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c2831415-d88f-4c19-a272-4898fa142bd0" (UID: "c2831415-d88f-4c19-a272-4898fa142bd0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.855240 4644 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2831415-d88f-4c19-a272-4898fa142bd0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.856106 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2831415-d88f-4c19-a272-4898fa142bd0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c2831415-d88f-4c19-a272-4898fa142bd0" (UID: "c2831415-d88f-4c19-a272-4898fa142bd0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.868788 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-scripts" (OuterVolumeSpecName: "scripts") pod "c2831415-d88f-4c19-a272-4898fa142bd0" (UID: "c2831415-d88f-4c19-a272-4898fa142bd0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.881654 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2831415-d88f-4c19-a272-4898fa142bd0-kube-api-access-7whtf" (OuterVolumeSpecName: "kube-api-access-7whtf") pod "c2831415-d88f-4c19-a272-4898fa142bd0" (UID: "c2831415-d88f-4c19-a272-4898fa142bd0"). InnerVolumeSpecName "kube-api-access-7whtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.935738 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c2831415-d88f-4c19-a272-4898fa142bd0" (UID: "c2831415-d88f-4c19-a272-4898fa142bd0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.944573 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2831415-d88f-4c19-a272-4898fa142bd0" (UID: "c2831415-d88f-4c19-a272-4898fa142bd0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.956649 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.956680 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.956692 4644 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.956701 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7whtf\" (UniqueName: \"kubernetes.io/projected/c2831415-d88f-4c19-a272-4898fa142bd0-kube-api-access-7whtf\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.956709 4644 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2831415-d88f-4c19-a272-4898fa142bd0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:58 crc kubenswrapper[4644]: I0204 09:01:58.985194 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-config-data" (OuterVolumeSpecName: "config-data") pod "c2831415-d88f-4c19-a272-4898fa142bd0" (UID: "c2831415-d88f-4c19-a272-4898fa142bd0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.058708 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2831415-d88f-4c19-a272-4898fa142bd0-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.576441 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2831415-d88f-4c19-a272-4898fa142bd0","Type":"ContainerDied","Data":"b880fd69586b5a4897d8b5185a4eb8243e1965fa96ff6b81baac92fa597723ed"} Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.576497 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.576522 4644 scope.go:117] "RemoveContainer" containerID="a86a224c6105fdf75148521d8aad7b3c7dd410b4673e5af950a65cab3e2385a6" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.602379 4644 scope.go:117] "RemoveContainer" containerID="27a747ec948e5e971fb5e60065603d5cfd080c275a543d09ee5fb11f00fdff9c" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.651486 4644 scope.go:117] "RemoveContainer" containerID="a2dfe072daa067a30f643a86fe89f6eb328cfc0307a5f14750f0cbefe8e6b812" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.660922 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.698813 4644 scope.go:117] "RemoveContainer" containerID="18af029268efa27f781e3f91f3cc5729c59b0c00ed1000c22623524efb85f970" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.702728 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.713183 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:01:59 crc kubenswrapper[4644]: E0204 09:01:59.713747 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="ceilometer-notification-agent" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.713817 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="ceilometer-notification-agent" Feb 04 09:01:59 crc kubenswrapper[4644]: E0204 09:01:59.713851 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="sg-core" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.713858 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="sg-core" Feb 04 09:01:59 crc kubenswrapper[4644]: E0204 09:01:59.713898 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="ceilometer-central-agent" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.713906 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="ceilometer-central-agent" Feb 04 09:01:59 crc kubenswrapper[4644]: E0204 09:01:59.713919 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="proxy-httpd" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.713925 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="proxy-httpd" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.714199 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="proxy-httpd" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.714218 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="ceilometer-notification-agent" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.727237 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="ceilometer-central-agent" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.727266 4644 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" containerName="sg-core" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.730068 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.730183 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.732098 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.732200 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.787390 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-scripts\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.787436 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7n6p\" (UniqueName: \"kubernetes.io/projected/5d8c5361-9774-4159-8a32-1fab71f080f1-kube-api-access-h7n6p\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.787483 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-config-data\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.787540 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c5361-9774-4159-8a32-1fab71f080f1-log-httpd\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.787566 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c5361-9774-4159-8a32-1fab71f080f1-run-httpd\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.787587 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.787641 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.889155 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.889229 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-scripts\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.889262 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7n6p\" (UniqueName: \"kubernetes.io/projected/5d8c5361-9774-4159-8a32-1fab71f080f1-kube-api-access-h7n6p\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.889344 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-config-data\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.889400 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c5361-9774-4159-8a32-1fab71f080f1-log-httpd\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.889419 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c5361-9774-4159-8a32-1fab71f080f1-run-httpd\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.889442 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.890587 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c5361-9774-4159-8a32-1fab71f080f1-run-httpd\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.890749 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c5361-9774-4159-8a32-1fab71f080f1-log-httpd\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.897216 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.898557 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.898689 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-scripts\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.900065 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-config-data\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:01:59 crc kubenswrapper[4644]: I0204 09:01:59.917077 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7n6p\" (UniqueName: \"kubernetes.io/projected/5d8c5361-9774-4159-8a32-1fab71f080f1-kube-api-access-h7n6p\") pod \"ceilometer-0\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") " pod="openstack/ceilometer-0" Feb 04 09:02:00 crc kubenswrapper[4644]: I0204 09:02:00.054569 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 09:02:00 crc kubenswrapper[4644]: I0204 09:02:00.608655 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:02:00 crc kubenswrapper[4644]: W0204 09:02:00.612443 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d8c5361_9774_4159_8a32_1fab71f080f1.slice/crio-b91d9f345e3361feddd80b0bbe3d260dc1f0065de7164235c5f1db66bfbd10ae WatchSource:0}: Error finding container b91d9f345e3361feddd80b0bbe3d260dc1f0065de7164235c5f1db66bfbd10ae: Status 404 returned error can't find the container with id b91d9f345e3361feddd80b0bbe3d260dc1f0065de7164235c5f1db66bfbd10ae Feb 04 09:02:00 crc kubenswrapper[4644]: I0204 09:02:00.670447 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2831415-d88f-4c19-a272-4898fa142bd0" path="/var/lib/kubelet/pods/c2831415-d88f-4c19-a272-4898fa142bd0/volumes" Feb 04 09:02:01 crc kubenswrapper[4644]: I0204 09:02:01.599205 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8c5361-9774-4159-8a32-1fab71f080f1","Type":"ContainerStarted","Data":"b91d9f345e3361feddd80b0bbe3d260dc1f0065de7164235c5f1db66bfbd10ae"} Feb 04 09:02:03 crc kubenswrapper[4644]: I0204 09:02:03.618215 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8c5361-9774-4159-8a32-1fab71f080f1","Type":"ContainerStarted","Data":"4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d"} Feb 04 09:02:03 crc kubenswrapper[4644]: I0204 09:02:03.619534 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8c5361-9774-4159-8a32-1fab71f080f1","Type":"ContainerStarted","Data":"612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab"} Feb 04 09:02:04 crc kubenswrapper[4644]: I0204 09:02:04.633934 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8c5361-9774-4159-8a32-1fab71f080f1","Type":"ContainerStarted","Data":"c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873"} Feb 04 09:02:06 crc 
kubenswrapper[4644]: I0204 09:02:06.613948 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fb9db66f6-v84nx" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 04 09:02:06 crc kubenswrapper[4644]: I0204 09:02:06.655579 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8c5361-9774-4159-8a32-1fab71f080f1","Type":"ContainerStarted","Data":"7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264"} Feb 04 09:02:06 crc kubenswrapper[4644]: I0204 09:02:06.655768 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 04 09:02:10 crc kubenswrapper[4644]: I0204 09:02:10.701170 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.106750109 podStartE2EDuration="11.701143367s" podCreationTimestamp="2026-02-04 09:01:59 +0000 UTC" firstStartedPulling="2026-02-04 09:02:00.615952472 +0000 UTC m=+1230.656010237" lastFinishedPulling="2026-02-04 09:02:06.21034574 +0000 UTC m=+1236.250403495" observedRunningTime="2026-02-04 09:02:06.695745536 +0000 UTC m=+1236.735803291" watchObservedRunningTime="2026-02-04 09:02:10.701143367 +0000 UTC m=+1240.741201122" Feb 04 09:02:11 crc kubenswrapper[4644]: I0204 09:02:11.710259 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ng8rj" event={"ID":"a0880634-6912-4a8b-98b2-b18209a19896","Type":"ContainerStarted","Data":"bcf7faab1bdbfb04f06591d5dd695e11804aa4517d2e069a4859ccebb4ecd31e"} Feb 04 09:02:11 crc kubenswrapper[4644]: I0204 09:02:11.739859 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-ng8rj" podStartSLOduration=1.935779776 podStartE2EDuration="27.73984075s" podCreationTimestamp="2026-02-04 09:01:44 +0000 UTC" firstStartedPulling="2026-02-04 09:01:45.446963667 +0000 UTC m=+1215.487021422" lastFinishedPulling="2026-02-04 09:02:11.251024641 +0000 UTC m=+1241.291082396" observedRunningTime="2026-02-04 09:02:11.737545358 +0000 UTC m=+1241.777603123" watchObservedRunningTime="2026-02-04 09:02:11.73984075 +0000 UTC m=+1241.779898505" Feb 04 09:02:16 crc kubenswrapper[4644]: I0204 09:02:16.613729 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fb9db66f6-v84nx" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 04 09:02:16 crc kubenswrapper[4644]: I0204 09:02:16.614492 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 09:02:23 crc kubenswrapper[4644]: I0204 09:02:23.828057 4644 generic.go:334] "Generic (PLEG): container finished" podID="a0880634-6912-4a8b-98b2-b18209a19896" containerID="bcf7faab1bdbfb04f06591d5dd695e11804aa4517d2e069a4859ccebb4ecd31e" exitCode=0 Feb 04 09:02:23 crc kubenswrapper[4644]: I0204 09:02:23.828158 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ng8rj" event={"ID":"a0880634-6912-4a8b-98b2-b18209a19896","Type":"ContainerDied","Data":"bcf7faab1bdbfb04f06591d5dd695e11804aa4517d2e069a4859ccebb4ecd31e"} Feb 04 09:02:23 crc 
kubenswrapper[4644]: I0204 09:02:23.840061 4644 generic.go:334] "Generic (PLEG): container finished" podID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerID="1dab15a1be86303be92c3759c78a00e496aa99453479fc6bf25b4717598d3ad4" exitCode=137 Feb 04 09:02:23 crc kubenswrapper[4644]: I0204 09:02:23.840119 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb9db66f6-v84nx" event={"ID":"46ca97c0-c6d7-4547-bb97-1d8b032c6297","Type":"ContainerDied","Data":"1dab15a1be86303be92c3759c78a00e496aa99453479fc6bf25b4717598d3ad4"} Feb 04 09:02:23 crc kubenswrapper[4644]: I0204 09:02:23.840154 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb9db66f6-v84nx" event={"ID":"46ca97c0-c6d7-4547-bb97-1d8b032c6297","Type":"ContainerDied","Data":"4c860d6835611dfb22c0185348590c08fcab4a1380876d4cb8754ad26f0c10fd"} Feb 04 09:02:23 crc kubenswrapper[4644]: I0204 09:02:23.840171 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c860d6835611dfb22c0185348590c08fcab4a1380876d4cb8754ad26f0c10fd" Feb 04 09:02:23 crc kubenswrapper[4644]: I0204 09:02:23.893852 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.004916 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-horizon-tls-certs\") pod \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.005403 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcn9q\" (UniqueName: \"kubernetes.io/projected/46ca97c0-c6d7-4547-bb97-1d8b032c6297-kube-api-access-hcn9q\") pod \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.005587 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46ca97c0-c6d7-4547-bb97-1d8b032c6297-config-data\") pod \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.005822 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ca97c0-c6d7-4547-bb97-1d8b032c6297-logs\") pod \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.006626 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46ca97c0-c6d7-4547-bb97-1d8b032c6297-logs" (OuterVolumeSpecName: "logs") pod "46ca97c0-c6d7-4547-bb97-1d8b032c6297" (UID: "46ca97c0-c6d7-4547-bb97-1d8b032c6297"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.006923 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46ca97c0-c6d7-4547-bb97-1d8b032c6297-scripts\") pod \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.007408 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-horizon-secret-key\") pod \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.007585 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-combined-ca-bundle\") pod \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\" (UID: \"46ca97c0-c6d7-4547-bb97-1d8b032c6297\") " Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.008690 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ca97c0-c6d7-4547-bb97-1d8b032c6297-logs\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.010801 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ca97c0-c6d7-4547-bb97-1d8b032c6297-kube-api-access-hcn9q" (OuterVolumeSpecName: "kube-api-access-hcn9q") pod "46ca97c0-c6d7-4547-bb97-1d8b032c6297" (UID: "46ca97c0-c6d7-4547-bb97-1d8b032c6297"). InnerVolumeSpecName "kube-api-access-hcn9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.011005 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "46ca97c0-c6d7-4547-bb97-1d8b032c6297" (UID: "46ca97c0-c6d7-4547-bb97-1d8b032c6297"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.027548 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46ca97c0-c6d7-4547-bb97-1d8b032c6297-config-data" (OuterVolumeSpecName: "config-data") pod "46ca97c0-c6d7-4547-bb97-1d8b032c6297" (UID: "46ca97c0-c6d7-4547-bb97-1d8b032c6297"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.036512 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46ca97c0-c6d7-4547-bb97-1d8b032c6297-scripts" (OuterVolumeSpecName: "scripts") pod "46ca97c0-c6d7-4547-bb97-1d8b032c6297" (UID: "46ca97c0-c6d7-4547-bb97-1d8b032c6297"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.036798 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46ca97c0-c6d7-4547-bb97-1d8b032c6297" (UID: "46ca97c0-c6d7-4547-bb97-1d8b032c6297"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.063473 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "46ca97c0-c6d7-4547-bb97-1d8b032c6297" (UID: "46ca97c0-c6d7-4547-bb97-1d8b032c6297"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.110394 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcn9q\" (UniqueName: \"kubernetes.io/projected/46ca97c0-c6d7-4547-bb97-1d8b032c6297-kube-api-access-hcn9q\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.110438 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46ca97c0-c6d7-4547-bb97-1d8b032c6297-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.110451 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46ca97c0-c6d7-4547-bb97-1d8b032c6297-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.110463 4644 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.110475 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.110485 4644 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ca97c0-c6d7-4547-bb97-1d8b032c6297-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.870059 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fb9db66f6-v84nx" Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.924161 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fb9db66f6-v84nx"] Feb 04 09:02:24 crc kubenswrapper[4644]: I0204 09:02:24.951434 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5fb9db66f6-v84nx"] Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.274545 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.333494 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-combined-ca-bundle\") pod \"a0880634-6912-4a8b-98b2-b18209a19896\" (UID: \"a0880634-6912-4a8b-98b2-b18209a19896\") " Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.333597 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-scripts\") pod \"a0880634-6912-4a8b-98b2-b18209a19896\" (UID: \"a0880634-6912-4a8b-98b2-b18209a19896\") " Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.333640 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-config-data\") pod \"a0880634-6912-4a8b-98b2-b18209a19896\" (UID: \"a0880634-6912-4a8b-98b2-b18209a19896\") " Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.333683 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lnf9\" (UniqueName: \"kubernetes.io/projected/a0880634-6912-4a8b-98b2-b18209a19896-kube-api-access-4lnf9\") pod \"a0880634-6912-4a8b-98b2-b18209a19896\" (UID: \"a0880634-6912-4a8b-98b2-b18209a19896\") " Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.346046 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0880634-6912-4a8b-98b2-b18209a19896-kube-api-access-4lnf9" (OuterVolumeSpecName: "kube-api-access-4lnf9") pod "a0880634-6912-4a8b-98b2-b18209a19896" (UID: "a0880634-6912-4a8b-98b2-b18209a19896"). InnerVolumeSpecName "kube-api-access-4lnf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.346767 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-scripts" (OuterVolumeSpecName: "scripts") pod "a0880634-6912-4a8b-98b2-b18209a19896" (UID: "a0880634-6912-4a8b-98b2-b18209a19896"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.382603 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0880634-6912-4a8b-98b2-b18209a19896" (UID: "a0880634-6912-4a8b-98b2-b18209a19896"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.383861 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-config-data" (OuterVolumeSpecName: "config-data") pod "a0880634-6912-4a8b-98b2-b18209a19896" (UID: "a0880634-6912-4a8b-98b2-b18209a19896"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.435362 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.435392 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lnf9\" (UniqueName: \"kubernetes.io/projected/a0880634-6912-4a8b-98b2-b18209a19896-kube-api-access-4lnf9\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.435405 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.435415 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0880634-6912-4a8b-98b2-b18209a19896-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.907296 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ng8rj" event={"ID":"a0880634-6912-4a8b-98b2-b18209a19896","Type":"ContainerDied","Data":"61327a2888072ce0eafea7bbaf1209e89986f4e5a5c108fb2d6170b21964ce82"} Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.907358 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ng8rj" Feb 04 09:02:25 crc kubenswrapper[4644]: I0204 09:02:25.907369 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61327a2888072ce0eafea7bbaf1209e89986f4e5a5c108fb2d6170b21964ce82" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.039098 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 04 09:02:26 crc kubenswrapper[4644]: E0204 09:02:26.041549 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0880634-6912-4a8b-98b2-b18209a19896" containerName="nova-cell0-conductor-db-sync" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.041572 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0880634-6912-4a8b-98b2-b18209a19896" containerName="nova-cell0-conductor-db-sync" Feb 04 09:02:26 crc kubenswrapper[4644]: E0204 09:02:26.041605 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.041614 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" Feb 04 09:02:26 crc kubenswrapper[4644]: E0204 09:02:26.041637 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.041646 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" Feb 04 09:02:26 crc kubenswrapper[4644]: E0204 09:02:26.041663 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon-log" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.041671 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" 
containerName="horizon-log" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.041885 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.041908 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0880634-6912-4a8b-98b2-b18209a19896" containerName="nova-cell0-conductor-db-sync" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.041929 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.041944 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" containerName="horizon-log" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.042689 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.045141 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7hc5s" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.045466 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.077797 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.152060 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75651f7d-0816-4090-bcd8-0c20fd5660bd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"75651f7d-0816-4090-bcd8-0c20fd5660bd\") " pod="openstack/nova-cell0-conductor-0" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.152120 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75651f7d-0816-4090-bcd8-0c20fd5660bd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"75651f7d-0816-4090-bcd8-0c20fd5660bd\") " pod="openstack/nova-cell0-conductor-0" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.152142 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jwgd\" (UniqueName: \"kubernetes.io/projected/75651f7d-0816-4090-bcd8-0c20fd5660bd-kube-api-access-5jwgd\") pod \"nova-cell0-conductor-0\" (UID: \"75651f7d-0816-4090-bcd8-0c20fd5660bd\") " pod="openstack/nova-cell0-conductor-0" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.254494 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75651f7d-0816-4090-bcd8-0c20fd5660bd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"75651f7d-0816-4090-bcd8-0c20fd5660bd\") " pod="openstack/nova-cell0-conductor-0" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.254590 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75651f7d-0816-4090-bcd8-0c20fd5660bd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"75651f7d-0816-4090-bcd8-0c20fd5660bd\") " pod="openstack/nova-cell0-conductor-0" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.254632 4644 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jwgd\" (UniqueName: \"kubernetes.io/projected/75651f7d-0816-4090-bcd8-0c20fd5660bd-kube-api-access-5jwgd\") pod \"nova-cell0-conductor-0\" (UID: \"75651f7d-0816-4090-bcd8-0c20fd5660bd\") " pod="openstack/nova-cell0-conductor-0" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.265074 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75651f7d-0816-4090-bcd8-0c20fd5660bd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"75651f7d-0816-4090-bcd8-0c20fd5660bd\") " pod="openstack/nova-cell0-conductor-0" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.265098 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75651f7d-0816-4090-bcd8-0c20fd5660bd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"75651f7d-0816-4090-bcd8-0c20fd5660bd\") " pod="openstack/nova-cell0-conductor-0" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.282514 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jwgd\" (UniqueName: \"kubernetes.io/projected/75651f7d-0816-4090-bcd8-0c20fd5660bd-kube-api-access-5jwgd\") pod \"nova-cell0-conductor-0\" (UID: \"75651f7d-0816-4090-bcd8-0c20fd5660bd\") " pod="openstack/nova-cell0-conductor-0" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.375558 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.677620 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ca97c0-c6d7-4547-bb97-1d8b032c6297" path="/var/lib/kubelet/pods/46ca97c0-c6d7-4547-bb97-1d8b032c6297/volumes" Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.861856 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 04 09:02:26 crc kubenswrapper[4644]: W0204 09:02:26.861956 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75651f7d_0816_4090_bcd8_0c20fd5660bd.slice/crio-5c5321a0774ecd6e44bf3b62ebac22a85f01687c8094ae3004a6d6c7182c1291 WatchSource:0}: Error finding container 5c5321a0774ecd6e44bf3b62ebac22a85f01687c8094ae3004a6d6c7182c1291: Status 404 returned error can't find the container with id 5c5321a0774ecd6e44bf3b62ebac22a85f01687c8094ae3004a6d6c7182c1291 Feb 04 09:02:26 crc kubenswrapper[4644]: I0204 09:02:26.916091 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"75651f7d-0816-4090-bcd8-0c20fd5660bd","Type":"ContainerStarted","Data":"5c5321a0774ecd6e44bf3b62ebac22a85f01687c8094ae3004a6d6c7182c1291"} Feb 04 09:02:27 crc kubenswrapper[4644]: I0204 09:02:27.933948 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"75651f7d-0816-4090-bcd8-0c20fd5660bd","Type":"ContainerStarted","Data":"93e9fdc99b6ab82934dcb53b66c0a6249776b4116b2505c1289f59a102e03268"} Feb 04 09:02:27 crc kubenswrapper[4644]: I0204 09:02:27.934493 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 04 09:02:27 crc kubenswrapper[4644]: I0204 09:02:27.974815 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" 
podStartSLOduration=1.974776942 podStartE2EDuration="1.974776942s" podCreationTimestamp="2026-02-04 09:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:02:27.959115717 +0000 UTC m=+1257.999173492" watchObservedRunningTime="2026-02-04 09:02:27.974776942 +0000 UTC m=+1258.014834747"
Feb 04 09:02:30 crc kubenswrapper[4644]: I0204 09:02:30.062480 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 04 09:02:33 crc kubenswrapper[4644]: I0204 09:02:33.731762 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 04 09:02:33 crc kubenswrapper[4644]: I0204 09:02:33.732164 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="42467ee2-0414-443d-96c2-61b4118dd8d6" containerName="kube-state-metrics" containerID="cri-o://3132e04b42feeff5fc7d467cdf857222f933db6a8a61271a9cb3a25d685adc2f" gracePeriod=30
Feb 04 09:02:34 crc kubenswrapper[4644]: I0204 09:02:34.004809 4644 generic.go:334] "Generic (PLEG): container finished" podID="42467ee2-0414-443d-96c2-61b4118dd8d6" containerID="3132e04b42feeff5fc7d467cdf857222f933db6a8a61271a9cb3a25d685adc2f" exitCode=2
Feb 04 09:02:34 crc kubenswrapper[4644]: I0204 09:02:34.005060 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42467ee2-0414-443d-96c2-61b4118dd8d6","Type":"ContainerDied","Data":"3132e04b42feeff5fc7d467cdf857222f933db6a8a61271a9cb3a25d685adc2f"}
Feb 04 09:02:34 crc kubenswrapper[4644]: I0204 09:02:34.166946 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 04 09:02:34 crc kubenswrapper[4644]: I0204 09:02:34.202313 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmssd\" (UniqueName: \"kubernetes.io/projected/42467ee2-0414-443d-96c2-61b4118dd8d6-kube-api-access-bmssd\") pod \"42467ee2-0414-443d-96c2-61b4118dd8d6\" (UID: \"42467ee2-0414-443d-96c2-61b4118dd8d6\") "
Feb 04 09:02:34 crc kubenswrapper[4644]: I0204 09:02:34.208299 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42467ee2-0414-443d-96c2-61b4118dd8d6-kube-api-access-bmssd" (OuterVolumeSpecName: "kube-api-access-bmssd") pod "42467ee2-0414-443d-96c2-61b4118dd8d6" (UID: "42467ee2-0414-443d-96c2-61b4118dd8d6"). InnerVolumeSpecName "kube-api-access-bmssd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:02:34 crc kubenswrapper[4644]: I0204 09:02:34.304770 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmssd\" (UniqueName: \"kubernetes.io/projected/42467ee2-0414-443d-96c2-61b4118dd8d6-kube-api-access-bmssd\") on node \"crc\" DevicePath \"\""
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.016557 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42467ee2-0414-443d-96c2-61b4118dd8d6","Type":"ContainerDied","Data":"2e62b812c4d64c6f11006cdbeba2cb9c6d74dda8aaccbc1596bff5bcdab2fdeb"}
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.016612 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.016868 4644 scope.go:117] "RemoveContainer" containerID="3132e04b42feeff5fc7d467cdf857222f933db6a8a61271a9cb3a25d685adc2f"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.046442 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.055956 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.084742 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 04 09:02:35 crc kubenswrapper[4644]: E0204 09:02:35.085281 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42467ee2-0414-443d-96c2-61b4118dd8d6" containerName="kube-state-metrics"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.085391 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="42467ee2-0414-443d-96c2-61b4118dd8d6" containerName="kube-state-metrics"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.085641 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="42467ee2-0414-443d-96c2-61b4118dd8d6" containerName="kube-state-metrics"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.086266 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.093062 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.093367 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.108169 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.145391 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwzgm\" (UniqueName: \"kubernetes.io/projected/26059c78-ccf4-418d-9012-40eb6cc5ba6f-kube-api-access-pwzgm\") pod \"kube-state-metrics-0\" (UID: \"26059c78-ccf4-418d-9012-40eb6cc5ba6f\") " pod="openstack/kube-state-metrics-0"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.145441 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/26059c78-ccf4-418d-9012-40eb6cc5ba6f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"26059c78-ccf4-418d-9012-40eb6cc5ba6f\") " pod="openstack/kube-state-metrics-0"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.145513 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26059c78-ccf4-418d-9012-40eb6cc5ba6f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"26059c78-ccf4-418d-9012-40eb6cc5ba6f\") " pod="openstack/kube-state-metrics-0"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.145915 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/26059c78-ccf4-418d-9012-40eb6cc5ba6f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"26059c78-ccf4-418d-9012-40eb6cc5ba6f\") " pod="openstack/kube-state-metrics-0"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.252638 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/26059c78-ccf4-418d-9012-40eb6cc5ba6f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"26059c78-ccf4-418d-9012-40eb6cc5ba6f\") " pod="openstack/kube-state-metrics-0"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.252732 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwzgm\" (UniqueName: \"kubernetes.io/projected/26059c78-ccf4-418d-9012-40eb6cc5ba6f-kube-api-access-pwzgm\") pod \"kube-state-metrics-0\" (UID: \"26059c78-ccf4-418d-9012-40eb6cc5ba6f\") " pod="openstack/kube-state-metrics-0"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.252760 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/26059c78-ccf4-418d-9012-40eb6cc5ba6f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"26059c78-ccf4-418d-9012-40eb6cc5ba6f\") " pod="openstack/kube-state-metrics-0"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.252790 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26059c78-ccf4-418d-9012-40eb6cc5ba6f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"26059c78-ccf4-418d-9012-40eb6cc5ba6f\") " pod="openstack/kube-state-metrics-0"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.263273 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/26059c78-ccf4-418d-9012-40eb6cc5ba6f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"26059c78-ccf4-418d-9012-40eb6cc5ba6f\") " pod="openstack/kube-state-metrics-0"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.263558 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26059c78-ccf4-418d-9012-40eb6cc5ba6f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"26059c78-ccf4-418d-9012-40eb6cc5ba6f\") " pod="openstack/kube-state-metrics-0"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.269104 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/26059c78-ccf4-418d-9012-40eb6cc5ba6f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"26059c78-ccf4-418d-9012-40eb6cc5ba6f\") " pod="openstack/kube-state-metrics-0"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.278748 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwzgm\" (UniqueName: \"kubernetes.io/projected/26059c78-ccf4-418d-9012-40eb6cc5ba6f-kube-api-access-pwzgm\") pod \"kube-state-metrics-0\" (UID: \"26059c78-ccf4-418d-9012-40eb6cc5ba6f\") " pod="openstack/kube-state-metrics-0"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.450412 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.628482 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.629176 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="ceilometer-central-agent" containerID="cri-o://612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab" gracePeriod=30
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.629305 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="ceilometer-notification-agent" containerID="cri-o://4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d" gracePeriod=30
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.629449 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="sg-core" containerID="cri-o://c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873" gracePeriod=30
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.629240 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="proxy-httpd" containerID="cri-o://7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264" gracePeriod=30
Feb 04 09:02:35 crc kubenswrapper[4644]: I0204 09:02:35.913633 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.026349 4644 generic.go:334] "Generic (PLEG): container finished" podID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerID="7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264" exitCode=0
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.027279 4644 generic.go:334] "Generic (PLEG): container finished" podID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerID="c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873" exitCode=2
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.026374 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8c5361-9774-4159-8a32-1fab71f080f1","Type":"ContainerDied","Data":"7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264"}
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.027609 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8c5361-9774-4159-8a32-1fab71f080f1","Type":"ContainerDied","Data":"c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873"}
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.029415 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26059c78-ccf4-418d-9012-40eb6cc5ba6f","Type":"ContainerStarted","Data":"795f31f490d89167fb3db90db1e1642e4d5c834f6987576e9f05d518c6d637c5"}
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.431623 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.599375 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.672854 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42467ee2-0414-443d-96c2-61b4118dd8d6" path="/var/lib/kubelet/pods/42467ee2-0414-443d-96c2-61b4118dd8d6/volumes"
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.711179 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c5361-9774-4159-8a32-1fab71f080f1-run-httpd\") pod \"5d8c5361-9774-4159-8a32-1fab71f080f1\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") "
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.711265 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-sg-core-conf-yaml\") pod \"5d8c5361-9774-4159-8a32-1fab71f080f1\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") "
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.711365 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-config-data\") pod \"5d8c5361-9774-4159-8a32-1fab71f080f1\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") "
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.711431 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c5361-9774-4159-8a32-1fab71f080f1-log-httpd\") pod \"5d8c5361-9774-4159-8a32-1fab71f080f1\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") "
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.711494 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-scripts\") pod \"5d8c5361-9774-4159-8a32-1fab71f080f1\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") "
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.711572 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7n6p\" (UniqueName: \"kubernetes.io/projected/5d8c5361-9774-4159-8a32-1fab71f080f1-kube-api-access-h7n6p\") pod \"5d8c5361-9774-4159-8a32-1fab71f080f1\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") "
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.711604 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-combined-ca-bundle\") pod \"5d8c5361-9774-4159-8a32-1fab71f080f1\" (UID: \"5d8c5361-9774-4159-8a32-1fab71f080f1\") "
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.713786 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d8c5361-9774-4159-8a32-1fab71f080f1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5d8c5361-9774-4159-8a32-1fab71f080f1" (UID: "5d8c5361-9774-4159-8a32-1fab71f080f1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.713953 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d8c5361-9774-4159-8a32-1fab71f080f1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5d8c5361-9774-4159-8a32-1fab71f080f1" (UID: "5d8c5361-9774-4159-8a32-1fab71f080f1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.723441 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-scripts" (OuterVolumeSpecName: "scripts") pod "5d8c5361-9774-4159-8a32-1fab71f080f1" (UID: "5d8c5361-9774-4159-8a32-1fab71f080f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.726504 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d8c5361-9774-4159-8a32-1fab71f080f1-kube-api-access-h7n6p" (OuterVolumeSpecName: "kube-api-access-h7n6p") pod "5d8c5361-9774-4159-8a32-1fab71f080f1" (UID: "5d8c5361-9774-4159-8a32-1fab71f080f1"). InnerVolumeSpecName "kube-api-access-h7n6p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.750069 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5d8c5361-9774-4159-8a32-1fab71f080f1" (UID: "5d8c5361-9774-4159-8a32-1fab71f080f1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.813512 4644 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c5361-9774-4159-8a32-1fab71f080f1-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.813546 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.813556 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7n6p\" (UniqueName: \"kubernetes.io/projected/5d8c5361-9774-4159-8a32-1fab71f080f1-kube-api-access-h7n6p\") on node \"crc\" DevicePath \"\""
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.813566 4644 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8c5361-9774-4159-8a32-1fab71f080f1-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.813576 4644 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.819363 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d8c5361-9774-4159-8a32-1fab71f080f1" (UID: "5d8c5361-9774-4159-8a32-1fab71f080f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.845220 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-config-data" (OuterVolumeSpecName: "config-data") pod "5d8c5361-9774-4159-8a32-1fab71f080f1" (UID: "5d8c5361-9774-4159-8a32-1fab71f080f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.916019 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 09:02:36 crc kubenswrapper[4644]: I0204 09:02:36.916054 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8c5361-9774-4159-8a32-1fab71f080f1-config-data\") on node \"crc\" DevicePath \"\""
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.031412 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-k2hj8"]
Feb 04 09:02:37 crc kubenswrapper[4644]: E0204 09:02:37.032368 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="ceilometer-notification-agent"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.032390 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="ceilometer-notification-agent"
Feb 04 09:02:37 crc kubenswrapper[4644]: E0204 09:02:37.032403 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="ceilometer-central-agent"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.032413 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="ceilometer-central-agent"
Feb 04 09:02:37 crc kubenswrapper[4644]: E0204 09:02:37.032436 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="proxy-httpd"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.032446 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="proxy-httpd"
Feb 04 09:02:37 crc kubenswrapper[4644]: E0204 09:02:37.032477 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="sg-core"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.032487 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="sg-core"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.032750 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="ceilometer-central-agent"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.032789 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="sg-core"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.032803 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="proxy-httpd"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.032823 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerName="ceilometer-notification-agent"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.033637 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k2hj8"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.036462 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.041171 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.045724 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26059c78-ccf4-418d-9012-40eb6cc5ba6f","Type":"ContainerStarted","Data":"a76a6f3768da12122829d18fe8fd178705004e5b3145eae5f10c540a2bcab595"}
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.046528 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.049943 4644 generic.go:334] "Generic (PLEG): container finished" podID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerID="4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d" exitCode=0
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.050092 4644 generic.go:334] "Generic (PLEG): container finished" podID="5d8c5361-9774-4159-8a32-1fab71f080f1" containerID="612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab" exitCode=0
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.050203 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8c5361-9774-4159-8a32-1fab71f080f1","Type":"ContainerDied","Data":"4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d"}
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.050299 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8c5361-9774-4159-8a32-1fab71f080f1","Type":"ContainerDied","Data":"612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab"}
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.050474 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8c5361-9774-4159-8a32-1fab71f080f1","Type":"ContainerDied","Data":"b91d9f345e3361feddd80b0bbe3d260dc1f0065de7164235c5f1db66bfbd10ae"}
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.050577 4644 scope.go:117] "RemoveContainer" containerID="7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.050896 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.057691 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-k2hj8"]
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.078258 4644 scope.go:117] "RemoveContainer" containerID="c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.109665 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.662956298 podStartE2EDuration="2.109648043s" podCreationTimestamp="2026-02-04 09:02:35 +0000 UTC" firstStartedPulling="2026-02-04 09:02:35.930318675 +0000 UTC m=+1265.970376430" lastFinishedPulling="2026-02-04 09:02:36.37701042 +0000 UTC m=+1266.417068175" observedRunningTime="2026-02-04 09:02:37.106693122 +0000 UTC m=+1267.146750887" watchObservedRunningTime="2026-02-04 09:02:37.109648043 +0000 UTC m=+1267.149705798"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.118797 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlbcl\" (UniqueName: \"kubernetes.io/projected/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-kube-api-access-xlbcl\") pod \"nova-cell0-cell-mapping-k2hj8\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " pod="openstack/nova-cell0-cell-mapping-k2hj8"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.118870 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-config-data\") pod \"nova-cell0-cell-mapping-k2hj8\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " pod="openstack/nova-cell0-cell-mapping-k2hj8"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.118923 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k2hj8\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " pod="openstack/nova-cell0-cell-mapping-k2hj8"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.118982 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-scripts\") pod \"nova-cell0-cell-mapping-k2hj8\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " pod="openstack/nova-cell0-cell-mapping-k2hj8"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.132755 4644 scope.go:117] "RemoveContainer" containerID="4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.182912 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.209967 4644 scope.go:117] "RemoveContainer" containerID="612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.210118 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.226205 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlbcl\" (UniqueName: \"kubernetes.io/projected/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-kube-api-access-xlbcl\") pod \"nova-cell0-cell-mapping-k2hj8\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " pod="openstack/nova-cell0-cell-mapping-k2hj8"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.226560 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-config-data\") pod \"nova-cell0-cell-mapping-k2hj8\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " pod="openstack/nova-cell0-cell-mapping-k2hj8"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.226667 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k2hj8\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " pod="openstack/nova-cell0-cell-mapping-k2hj8"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.226786 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-scripts\") pod \"nova-cell0-cell-mapping-k2hj8\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " pod="openstack/nova-cell0-cell-mapping-k2hj8"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.234274 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-scripts\") pod \"nova-cell0-cell-mapping-k2hj8\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " pod="openstack/nova-cell0-cell-mapping-k2hj8"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.237922 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-config-data\") pod \"nova-cell0-cell-mapping-k2hj8\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " pod="openstack/nova-cell0-cell-mapping-k2hj8"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.239507 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k2hj8\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " pod="openstack/nova-cell0-cell-mapping-k2hj8"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.269435 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.270950 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.276567 4644 scope.go:117] "RemoveContainer" containerID="7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.277781 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 04 09:02:37 crc kubenswrapper[4644]: E0204 09:02:37.278927 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264\": container with ID starting with 7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264 not found: ID does not exist" containerID="7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.278960 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264"} err="failed to get container status \"7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264\": rpc error: code = NotFound desc = could not find container \"7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264\": container with ID starting with 7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264 not found: ID does not exist"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.278981 4644 scope.go:117] "RemoveContainer" containerID="c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873"
Feb 04 09:02:37 crc kubenswrapper[4644]: E0204 09:02:37.279214 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873\": container with ID starting with c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873 not found: ID does not exist" containerID="c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.279236 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873"} err="failed to get container status \"c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873\": rpc error: code = NotFound desc = could not find container \"c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873\": container with ID starting with c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873 not found: ID does not exist"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.279247 4644 scope.go:117] "RemoveContainer" containerID="4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.280235 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: E0204 09:02:37.280408 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d\": container with ID starting with 4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d not found: ID does not exist" containerID="4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.280428 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d"} err="failed to get container status \"4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d\": rpc error: code = NotFound desc = could not find container \"4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d\": container with ID starting with 4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d not found: ID does not exist"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.280442 4644 scope.go:117] "RemoveContainer" containerID="612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab"
Feb 04 09:02:37 crc kubenswrapper[4644]: E0204 09:02:37.280735 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab\": container with ID starting with 612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab not found: ID does not exist" containerID="612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.280757 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab"} err="failed to get container status \"612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab\": rpc error: code = NotFound desc = could not find container \"612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab\": container with ID starting with 612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab not found: ID does not exist"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.280769 4644 scope.go:117] "RemoveContainer" containerID="7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.291702 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264"} err="failed to get container status \"7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264\": rpc error: code = NotFound desc = could not find container \"7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264\": container with ID starting with 7356753e31617bbef8375571edb90900f5ead0a5397b6e89afc568bbb2d54264 not found: ID does not exist"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.291741 4644 scope.go:117] "RemoveContainer" containerID="c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.295148 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873"} err="failed to get container status \"c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873\": rpc error: code = NotFound desc = could not find container \"c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873\": container with ID starting with c39d982fdec78dd1d5f066a14f99bc1aa1459b9201b4b664f685f099f5b36873 not found: ID does not exist"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.295187 4644 scope.go:117] "RemoveContainer" containerID="4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.295564 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d"} err="failed to get container status \"4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d\": rpc error: code = NotFound desc = could not find container \"4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d\": container with ID starting with 4e18714905ad76d47acde53d4f583637f2ae34b979a10338e54ea016a958bf6d not found: ID does not exist"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.295596 4644 scope.go:117] "RemoveContainer" containerID="612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.295835 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab"} err="failed to get container status \"612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab\": rpc error: code = NotFound desc = could not find container \"612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab\": container with ID starting with 612af179491d2aa115665c0bc26d7c4b042622558023c5cf8320c76bf80c2aab not found: ID does not exist"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.306354 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.310800 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.310896 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.323631 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.323846 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.329842 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.331267 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.333133 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.339823 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlbcl\" (UniqueName: \"kubernetes.io/projected/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-kube-api-access-xlbcl\") pod \"nova-cell0-cell-mapping-k2hj8\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " pod="openstack/nova-cell0-cell-mapping-k2hj8"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.352448 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.355858 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k2hj8"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.387985 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.443721 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.443780 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.443821 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc4pw\" (UniqueName: \"kubernetes.io/projected/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-kube-api-access-rc4pw\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.443848 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-config-data\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.443919 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\") " pod="openstack/nova-scheduler-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.443938 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-config-data\") pod \"nova-scheduler-0\" (UID: \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\") " pod="openstack/nova-scheduler-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.443972 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-scripts\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.444001 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.444022 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"287914be-51e2-4982-a7e0-9e3cc4bfc1aa\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.444043 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-run-httpd\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.444099 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-log-httpd\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.444121 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v99nl\" (UniqueName: \"kubernetes.io/projected/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-kube-api-access-v99nl\") pod \"nova-cell1-novncproxy-0\" (UID: \"287914be-51e2-4982-a7e0-9e3cc4bfc1aa\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.444155 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc2g7\" (UniqueName: \"kubernetes.io/projected/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-kube-api-access-kc2g7\") pod \"nova-scheduler-0\" (UID: \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\") " pod="openstack/nova-scheduler-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.444183 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"287914be-51e2-4982-a7e0-9e3cc4bfc1aa\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.546081 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-log-httpd\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.546154 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v99nl\" (UniqueName: \"kubernetes.io/projected/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-kube-api-access-v99nl\") pod \"nova-cell1-novncproxy-0\" (UID: \"287914be-51e2-4982-a7e0-9e3cc4bfc1aa\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.546187 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc2g7\" (UniqueName: \"kubernetes.io/projected/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-kube-api-access-kc2g7\") pod \"nova-scheduler-0\" (UID: \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\") " pod="openstack/nova-scheduler-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.546214 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"287914be-51e2-4982-a7e0-9e3cc4bfc1aa\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.546268 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.546291 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.546319 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc4pw\" (UniqueName: \"kubernetes.io/projected/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-kube-api-access-rc4pw\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.546354 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-config-data\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.546396 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\") " pod="openstack/nova-scheduler-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.546415 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-config-data\") pod \"nova-scheduler-0\" (UID: \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\") " pod="openstack/nova-scheduler-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.546445 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-scripts\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.546469 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.546491 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"287914be-51e2-4982-a7e0-9e3cc4bfc1aa\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.546509 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-run-httpd\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.548639 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-run-httpd\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.548867 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-log-httpd\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.599468 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"287914be-51e2-4982-a7e0-9e3cc4bfc1aa\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.599552 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-scripts\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.607702 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\") " pod="openstack/nova-scheduler-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.622985 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-config-data\") pod \"nova-scheduler-0\" (UID: \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\") " pod="openstack/nova-scheduler-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.626196 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v99nl\" (UniqueName: \"kubernetes.io/projected/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-kube-api-access-v99nl\") pod \"nova-cell1-novncproxy-0\" (UID: \"287914be-51e2-4982-a7e0-9e3cc4bfc1aa\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.626761 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.627345 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.627850 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.628707 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-config-data\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.629191 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc4pw\" (UniqueName: \"kubernetes.io/projected/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-kube-api-access-rc4pw\") pod \"ceilometer-0\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.629590 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"287914be-51e2-4982-a7e0-9e3cc4bfc1aa\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.636948 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc2g7\" (UniqueName: \"kubernetes.io/projected/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-kube-api-access-kc2g7\") pod \"nova-scheduler-0\" (UID: \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\") " pod="openstack/nova-scheduler-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.645567 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.647191 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.660256 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.679388 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.695246 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.697970 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.705896 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.713619 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.732123 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.758355 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g9lq\" (UniqueName: \"kubernetes.io/projected/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-kube-api-access-9g9lq\") pod \"nova-api-0\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " pod="openstack/nova-api-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.758983 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-config-data\") pod \"nova-api-0\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " pod="openstack/nova-api-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.779149 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " pod="openstack/nova-api-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.779830 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-logs\") pod \"nova-api-0\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " pod="openstack/nova-api-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.830907 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.842233 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.845192 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-fzd7f"]
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.846887 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-fzd7f"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.884723 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6b69c38-8b1b-4419-9864-cd4b311d09c5-config-data\") pod \"nova-metadata-0\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " pod="openstack/nova-metadata-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.884807 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g9lq\" (UniqueName: \"kubernetes.io/projected/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-kube-api-access-9g9lq\") pod \"nova-api-0\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " pod="openstack/nova-api-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.884824 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6b69c38-8b1b-4419-9864-cd4b311d09c5-logs\") pod \"nova-metadata-0\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " pod="openstack/nova-metadata-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.884866 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxhgx\" (UniqueName: \"kubernetes.io/projected/b6b69c38-8b1b-4419-9864-cd4b311d09c5-kube-api-access-mxhgx\") pod \"nova-metadata-0\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " pod="openstack/nova-metadata-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.884907 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-config-data\") pod \"nova-api-0\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " pod="openstack/nova-api-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.884936 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b69c38-8b1b-4419-9864-cd4b311d09c5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " pod="openstack/nova-metadata-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.884999 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " pod="openstack/nova-api-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.885032 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-logs\") pod \"nova-api-0\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " pod="openstack/nova-api-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.889555 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-fzd7f"]
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.890254 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-logs\") pod \"nova-api-0\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " pod="openstack/nova-api-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.922164 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " pod="openstack/nova-api-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.922863 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-config-data\") pod \"nova-api-0\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " pod="openstack/nova-api-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.933248 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g9lq\" (UniqueName: \"kubernetes.io/projected/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-kube-api-access-9g9lq\") pod \"nova-api-0\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " pod="openstack/nova-api-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.986560 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6b69c38-8b1b-4419-9864-cd4b311d09c5-logs\") pod \"nova-metadata-0\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " pod="openstack/nova-metadata-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.986640 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-config\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.986662 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxhgx\" (UniqueName: \"kubernetes.io/projected/b6b69c38-8b1b-4419-9864-cd4b311d09c5-kube-api-access-mxhgx\") pod \"nova-metadata-0\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " pod="openstack/nova-metadata-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.986709 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.986740 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b69c38-8b1b-4419-9864-cd4b311d09c5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " pod="openstack/nova-metadata-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.986789 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.986838 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpx5f\" (UniqueName: \"kubernetes.io/projected/51698a4f-9e64-41ea-9130-c197b4505acb-kube-api-access-hpx5f\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.986897 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-dns-svc\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.986938 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6b69c38-8b1b-4419-9864-cd4b311d09c5-config-data\") pod \"nova-metadata-0\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " pod="openstack/nova-metadata-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.986966 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.987498 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6b69c38-8b1b-4419-9864-cd4b311d09c5-logs\") pod \"nova-metadata-0\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " pod="openstack/nova-metadata-0"
Feb 04 09:02:37 crc kubenswrapper[4644]: I0204 09:02:37.991314 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b69c38-8b1b-4419-9864-cd4b311d09c5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " pod="openstack/nova-metadata-0"
Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.000623 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6b69c38-8b1b-4419-9864-cd4b311d09c5-config-data\") pod \"nova-metadata-0\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " pod="openstack/nova-metadata-0"
Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.005783 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxhgx\" (UniqueName: \"kubernetes.io/projected/b6b69c38-8b1b-4419-9864-cd4b311d09c5-kube-api-access-mxhgx\") pod \"nova-metadata-0\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " pod="openstack/nova-metadata-0"
Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.033103 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.075772 4644 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.088279 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.088344 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpx5f\" (UniqueName: \"kubernetes.io/projected/51698a4f-9e64-41ea-9130-c197b4505acb-kube-api-access-hpx5f\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.088393 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-dns-svc\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.088430 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.088488 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-config\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.088524 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.089203 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.089218 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.089746 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-dns-svc\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 
09:02:38.089947 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.090118 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-config\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.116003 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpx5f\" (UniqueName: \"kubernetes.io/projected/51698a4f-9e64-41ea-9130-c197b4505acb-kube-api-access-hpx5f\") pod \"dnsmasq-dns-bccf8f775-fzd7f\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.200295 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-k2hj8"] Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.223587 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.584460 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 09:02:38 crc kubenswrapper[4644]: W0204 09:02:38.602718 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod287914be_51e2_4982_a7e0_9e3cc4bfc1aa.slice/crio-7374a9cf51e287a17e0bbf29de6a33d61aa8f6c0d24e353eed0f45e7eff603d0 WatchSource:0}: Error finding container 7374a9cf51e287a17e0bbf29de6a33d61aa8f6c0d24e353eed0f45e7eff603d0: Status 404 returned error can't find the container with id 7374a9cf51e287a17e0bbf29de6a33d61aa8f6c0d24e353eed0f45e7eff603d0 Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.691569 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d8c5361-9774-4159-8a32-1fab71f080f1" path="/var/lib/kubelet/pods/5d8c5361-9774-4159-8a32-1fab71f080f1/volumes" Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.787832 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.930031 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:02:38 crc kubenswrapper[4644]: W0204 09:02:38.963986 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6b69c38_8b1b_4419_9864_cd4b311d09c5.slice/crio-84dc00d70d496140434629e4e2348a5cd2d0c3bea0e7ef35707ba3062d351a62 WatchSource:0}: Error finding container 84dc00d70d496140434629e4e2348a5cd2d0c3bea0e7ef35707ba3062d351a62: Status 404 returned error can't find the container with id 84dc00d70d496140434629e4e2348a5cd2d0c3bea0e7ef35707ba3062d351a62 Feb 04 09:02:38 crc kubenswrapper[4644]: I0204 09:02:38.965676 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.151652 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"b6b69c38-8b1b-4419-9864-cd4b311d09c5","Type":"ContainerStarted","Data":"84dc00d70d496140434629e4e2348a5cd2d0c3bea0e7ef35707ba3062d351a62"} Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.153603 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bea8f471-d501-4dd4-b1dc-3fa921ba12a8","Type":"ContainerStarted","Data":"9f9f13be6f817cfc6117f641ee0d76b0c4072ff0ad943c983099188e265a135d"} Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.154549 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12","Type":"ContainerStarted","Data":"d4748b16aaad17f67060a2cc304eb1e2bc289a5595ab70109abe7e6a316d762f"} Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.155308 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"287914be-51e2-4982-a7e0-9e3cc4bfc1aa","Type":"ContainerStarted","Data":"7374a9cf51e287a17e0bbf29de6a33d61aa8f6c0d24e353eed0f45e7eff603d0"} Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.156357 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k2hj8" event={"ID":"13bbfe4b-94f7-4d67-9486-84fe0d0148a7","Type":"ContainerStarted","Data":"bbb89581a7f440c7bb50098d9840fadf2d2ab7492b02b8d833e0634207094cbd"} Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.156379 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k2hj8" event={"ID":"13bbfe4b-94f7-4d67-9486-84fe0d0148a7","Type":"ContainerStarted","Data":"6ae25807eccc314cd2b340883dc48379540db53a6c7d6b317477c9109eff0fb8"} Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.174744 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-k2hj8" podStartSLOduration=2.174724935 podStartE2EDuration="2.174724935s" podCreationTimestamp="2026-02-04 09:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:02:39.172462693 +0000 UTC m=+1269.212520448" watchObservedRunningTime="2026-02-04 09:02:39.174724935 +0000 UTC m=+1269.214782690" Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.230081 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.248672 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-fzd7f"] Feb 04 09:02:39 crc kubenswrapper[4644]: W0204 09:02:39.254561 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51698a4f_9e64_41ea_9130_c197b4505acb.slice/crio-2c932d37126904e7e4d8076176e5ff61dc17f88daa6a7266702b7a5521da2219 WatchSource:0}: Error finding container 2c932d37126904e7e4d8076176e5ff61dc17f88daa6a7266702b7a5521da2219: Status 404 returned error can't find the container with id 2c932d37126904e7e4d8076176e5ff61dc17f88daa6a7266702b7a5521da2219 Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.647877 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7bsw6"] Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.649384 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7bsw6" Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.651732 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.651894 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.672079 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7bsw6"] Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.757408 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k26g9\" (UniqueName: \"kubernetes.io/projected/e4a5663f-2ce9-417b-a359-5db9a580628b-kube-api-access-k26g9\") pod \"nova-cell1-conductor-db-sync-7bsw6\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " pod="openstack/nova-cell1-conductor-db-sync-7bsw6" Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.757493 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-config-data\") pod \"nova-cell1-conductor-db-sync-7bsw6\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " pod="openstack/nova-cell1-conductor-db-sync-7bsw6" Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.757602 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-scripts\") pod \"nova-cell1-conductor-db-sync-7bsw6\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " pod="openstack/nova-cell1-conductor-db-sync-7bsw6" Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.757626 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7bsw6\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " pod="openstack/nova-cell1-conductor-db-sync-7bsw6" Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.858915 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-config-data\") pod \"nova-cell1-conductor-db-sync-7bsw6\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " pod="openstack/nova-cell1-conductor-db-sync-7bsw6" Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.859131 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-scripts\") pod \"nova-cell1-conductor-db-sync-7bsw6\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " pod="openstack/nova-cell1-conductor-db-sync-7bsw6" Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.859158 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7bsw6\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " pod="openstack/nova-cell1-conductor-db-sync-7bsw6" Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.859227 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-k26g9\" (UniqueName: \"kubernetes.io/projected/e4a5663f-2ce9-417b-a359-5db9a580628b-kube-api-access-k26g9\") pod \"nova-cell1-conductor-db-sync-7bsw6\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " pod="openstack/nova-cell1-conductor-db-sync-7bsw6" Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.866050 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-scripts\") pod \"nova-cell1-conductor-db-sync-7bsw6\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " pod="openstack/nova-cell1-conductor-db-sync-7bsw6" Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.869130 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7bsw6\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " pod="openstack/nova-cell1-conductor-db-sync-7bsw6" Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.870598 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-config-data\") pod \"nova-cell1-conductor-db-sync-7bsw6\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " pod="openstack/nova-cell1-conductor-db-sync-7bsw6" Feb 04 09:02:39 crc kubenswrapper[4644]: I0204 09:02:39.888692 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k26g9\" (UniqueName: \"kubernetes.io/projected/e4a5663f-2ce9-417b-a359-5db9a580628b-kube-api-access-k26g9\") pod \"nova-cell1-conductor-db-sync-7bsw6\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " pod="openstack/nova-cell1-conductor-db-sync-7bsw6" Feb 04 09:02:40 crc kubenswrapper[4644]: I0204 09:02:40.022920 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7bsw6" Feb 04 09:02:40 crc kubenswrapper[4644]: I0204 09:02:40.189674 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb","Type":"ContainerStarted","Data":"5e8847ba5eed4d6cbee509047c05637388b9dc8203e8c00b603db04de4a1e5c6"} Feb 04 09:02:40 crc kubenswrapper[4644]: I0204 09:02:40.196193 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bea8f471-d501-4dd4-b1dc-3fa921ba12a8","Type":"ContainerStarted","Data":"77efe9f60b23bce306b19b989f24df19ff5fc7785ff431756a67bf2a5d866df8"} Feb 04 09:02:40 crc kubenswrapper[4644]: I0204 09:02:40.201195 4644 generic.go:334] "Generic (PLEG): container finished" podID="51698a4f-9e64-41ea-9130-c197b4505acb" containerID="eb70e390ab37a83ab27c069433b36ea11e59f6531cd387b2fcca57d0f689f200" exitCode=0 Feb 04 09:02:40 crc kubenswrapper[4644]: I0204 09:02:40.201263 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" event={"ID":"51698a4f-9e64-41ea-9130-c197b4505acb","Type":"ContainerDied","Data":"eb70e390ab37a83ab27c069433b36ea11e59f6531cd387b2fcca57d0f689f200"} Feb 04 09:02:40 crc kubenswrapper[4644]: I0204 09:02:40.201353 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" event={"ID":"51698a4f-9e64-41ea-9130-c197b4505acb","Type":"ContainerStarted","Data":"2c932d37126904e7e4d8076176e5ff61dc17f88daa6a7266702b7a5521da2219"} Feb 04 09:02:40 crc kubenswrapper[4644]: I0204 09:02:40.594762 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7bsw6"] Feb 04 09:02:40 crc kubenswrapper[4644]: W0204 09:02:40.617860 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4a5663f_2ce9_417b_a359_5db9a580628b.slice/crio-78573e42e25d026c68c2e9b4273909c460d283a300943a825577705ddc43e918 WatchSource:0}: Error finding container 78573e42e25d026c68c2e9b4273909c460d283a300943a825577705ddc43e918: Status 404 returned error can't find the container with id 78573e42e25d026c68c2e9b4273909c460d283a300943a825577705ddc43e918 Feb 04 09:02:41 crc kubenswrapper[4644]: I0204 09:02:41.249544 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bea8f471-d501-4dd4-b1dc-3fa921ba12a8","Type":"ContainerStarted","Data":"23d93f61d8277a5c54f4fa003d51c31f20abfc01b423cb28a0f50c7d4817c0cb"} Feb 04 09:02:41 crc kubenswrapper[4644]: I0204 09:02:41.315394 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" event={"ID":"51698a4f-9e64-41ea-9130-c197b4505acb","Type":"ContainerStarted","Data":"f841c2352826f3563a19935f2f4be27b098431000189ce7ab9190ed7927b53cc"} Feb 04 09:02:41 crc kubenswrapper[4644]: I0204 09:02:41.315875 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:02:41 crc kubenswrapper[4644]: I0204 09:02:41.327132 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7bsw6" event={"ID":"e4a5663f-2ce9-417b-a359-5db9a580628b","Type":"ContainerStarted","Data":"75705b14068c1ae7b352ef3819097dadf96bb7380f42402734539b03018e6b4b"} Feb 04 09:02:41 crc kubenswrapper[4644]: I0204 09:02:41.327181 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7bsw6" 
event={"ID":"e4a5663f-2ce9-417b-a359-5db9a580628b","Type":"ContainerStarted","Data":"78573e42e25d026c68c2e9b4273909c460d283a300943a825577705ddc43e918"} Feb 04 09:02:41 crc kubenswrapper[4644]: I0204 09:02:41.378784 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" podStartSLOduration=4.378762426 podStartE2EDuration="4.378762426s" podCreationTimestamp="2026-02-04 09:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:02:41.374528901 +0000 UTC m=+1271.414586656" watchObservedRunningTime="2026-02-04 09:02:41.378762426 +0000 UTC m=+1271.418820181" Feb 04 09:02:41 crc kubenswrapper[4644]: I0204 09:02:41.473040 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7bsw6" podStartSLOduration=2.473018062 podStartE2EDuration="2.473018062s" podCreationTimestamp="2026-02-04 09:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:02:41.458731965 +0000 UTC m=+1271.498789720" watchObservedRunningTime="2026-02-04 09:02:41.473018062 +0000 UTC m=+1271.513075817" Feb 04 09:02:41 crc kubenswrapper[4644]: I0204 09:02:41.513586 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 09:02:41 crc kubenswrapper[4644]: I0204 09:02:41.546853 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:02:45 crc kubenswrapper[4644]: I0204 09:02:45.369910 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bea8f471-d501-4dd4-b1dc-3fa921ba12a8","Type":"ContainerStarted","Data":"f038bab204926288e14e7cc7919e3f0353add7b892c4537df22d9742b3f988dd"} Feb 04 09:02:45 crc kubenswrapper[4644]: I0204 09:02:45.579573 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 04 09:02:46 crc kubenswrapper[4644]: I0204 09:02:46.396124 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6b69c38-8b1b-4419-9864-cd4b311d09c5","Type":"ContainerStarted","Data":"edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c"} Feb 04 09:02:46 crc kubenswrapper[4644]: I0204 09:02:46.397269 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6b69c38-8b1b-4419-9864-cd4b311d09c5","Type":"ContainerStarted","Data":"b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f"} Feb 04 09:02:46 crc kubenswrapper[4644]: I0204 09:02:46.396992 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b6b69c38-8b1b-4419-9864-cd4b311d09c5" containerName="nova-metadata-metadata" containerID="cri-o://edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c" gracePeriod=30 Feb 04 09:02:46 crc kubenswrapper[4644]: I0204 09:02:46.396517 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b6b69c38-8b1b-4419-9864-cd4b311d09c5" containerName="nova-metadata-log" containerID="cri-o://b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f" gracePeriod=30 Feb 04 09:02:46 crc kubenswrapper[4644]: I0204 09:02:46.422506 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb","Type":"ContainerStarted","Data":"0b8ae2243af93c2ae3b5b702c1e42380a0269254d284bbeb2b852eb7464b173c"} Feb 04 09:02:46 crc kubenswrapper[4644]: I0204 09:02:46.422550 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb","Type":"ContainerStarted","Data":"2208cd0151ce80dd8ff53a1eb1f8d48edc7d28433cbc79f75165eca5cec596e0"} Feb 04 09:02:46 crc kubenswrapper[4644]: I0204 09:02:46.426179 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.7122587769999997 podStartE2EDuration="9.426150299s" podCreationTimestamp="2026-02-04 09:02:37 +0000 UTC" firstStartedPulling="2026-02-04 09:02:38.981854283 +0000 UTC m=+1269.021912038" lastFinishedPulling="2026-02-04 09:02:44.695745795 +0000 UTC m=+1274.735803560" observedRunningTime="2026-02-04 09:02:46.415047168 +0000 UTC m=+1276.455104923" watchObservedRunningTime="2026-02-04 09:02:46.426150299 +0000 UTC m=+1276.466208054" Feb 04 09:02:46 crc kubenswrapper[4644]: I0204 09:02:46.433935 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12","Type":"ContainerStarted","Data":"c1b0b95035b014da142748632628ad35524250dcea068d7091bb757864c2d8e3"} Feb 04 09:02:46 crc kubenswrapper[4644]: I0204 09:02:46.442385 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"287914be-51e2-4982-a7e0-9e3cc4bfc1aa","Type":"ContainerStarted","Data":"e8908909ab1a8004c9f4ea62f192e868522bd2f4014e09ab7762b6513e7d16fb"} Feb 04 09:02:46 crc kubenswrapper[4644]: I0204 09:02:46.442497 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="287914be-51e2-4982-a7e0-9e3cc4bfc1aa" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e8908909ab1a8004c9f4ea62f192e868522bd2f4014e09ab7762b6513e7d16fb" gracePeriod=30 Feb 04 09:02:46 crc kubenswrapper[4644]: I0204 09:02:46.461839 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.016915731 podStartE2EDuration="9.461818587s" podCreationTimestamp="2026-02-04 09:02:37 +0000 UTC" firstStartedPulling="2026-02-04 09:02:39.249514283 +0000 UTC m=+1269.289572038" lastFinishedPulling="2026-02-04 09:02:44.694417119 +0000 UTC m=+1274.734474894" observedRunningTime="2026-02-04 09:02:46.45526764 +0000 UTC m=+1276.495325395" watchObservedRunningTime="2026-02-04 09:02:46.461818587 +0000 UTC m=+1276.501876342" Feb 04 09:02:46 crc kubenswrapper[4644]: I0204 09:02:46.486019 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.799158936 podStartE2EDuration="9.485997413s" podCreationTimestamp="2026-02-04 09:02:37 +0000 UTC" firstStartedPulling="2026-02-04 09:02:39.004510488 +0000 UTC m=+1269.044568243" lastFinishedPulling="2026-02-04 09:02:44.691348975 +0000 UTC m=+1274.731406720" observedRunningTime="2026-02-04 09:02:46.477303738 +0000 UTC m=+1276.517361503" watchObservedRunningTime="2026-02-04 09:02:46.485997413 +0000 UTC m=+1276.526055168" Feb 04 09:02:46 crc kubenswrapper[4644]: I0204 09:02:46.499320 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.420765132 podStartE2EDuration="9.499302414s" podCreationTimestamp="2026-02-04 
09:02:37 +0000 UTC" firstStartedPulling="2026-02-04 09:02:38.621506479 +0000 UTC m=+1268.661564234" lastFinishedPulling="2026-02-04 09:02:44.700043761 +0000 UTC m=+1274.740101516" observedRunningTime="2026-02-04 09:02:46.498517462 +0000 UTC m=+1276.538575217" watchObservedRunningTime="2026-02-04 09:02:46.499302414 +0000 UTC m=+1276.539360159" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.342557 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.430109 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6b69c38-8b1b-4419-9864-cd4b311d09c5-config-data\") pod \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.431096 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6b69c38-8b1b-4419-9864-cd4b311d09c5-logs\") pod \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.431214 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b69c38-8b1b-4419-9864-cd4b311d09c5-combined-ca-bundle\") pod \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.431356 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxhgx\" (UniqueName: \"kubernetes.io/projected/b6b69c38-8b1b-4419-9864-cd4b311d09c5-kube-api-access-mxhgx\") pod \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\" (UID: \"b6b69c38-8b1b-4419-9864-cd4b311d09c5\") " Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.432472 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b69c38-8b1b-4419-9864-cd4b311d09c5-logs" (OuterVolumeSpecName: "logs") pod "b6b69c38-8b1b-4419-9864-cd4b311d09c5" (UID: "b6b69c38-8b1b-4419-9864-cd4b311d09c5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.452587 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b69c38-8b1b-4419-9864-cd4b311d09c5-kube-api-access-mxhgx" (OuterVolumeSpecName: "kube-api-access-mxhgx") pod "b6b69c38-8b1b-4419-9864-cd4b311d09c5" (UID: "b6b69c38-8b1b-4419-9864-cd4b311d09c5"). InnerVolumeSpecName "kube-api-access-mxhgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.478968 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b69c38-8b1b-4419-9864-cd4b311d09c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6b69c38-8b1b-4419-9864-cd4b311d09c5" (UID: "b6b69c38-8b1b-4419-9864-cd4b311d09c5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.497560 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b69c38-8b1b-4419-9864-cd4b311d09c5-config-data" (OuterVolumeSpecName: "config-data") pod "b6b69c38-8b1b-4419-9864-cd4b311d09c5" (UID: "b6b69c38-8b1b-4419-9864-cd4b311d09c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.510119 4644 generic.go:334] "Generic (PLEG): container finished" podID="b6b69c38-8b1b-4419-9864-cd4b311d09c5" containerID="edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c" exitCode=0 Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.511675 4644 generic.go:334] "Generic (PLEG): container finished" podID="b6b69c38-8b1b-4419-9864-cd4b311d09c5" containerID="b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f" exitCode=143 Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.512600 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.515234 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6b69c38-8b1b-4419-9864-cd4b311d09c5","Type":"ContainerDied","Data":"edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c"} Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.515276 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6b69c38-8b1b-4419-9864-cd4b311d09c5","Type":"ContainerDied","Data":"b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f"} Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.515285 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6b69c38-8b1b-4419-9864-cd4b311d09c5","Type":"ContainerDied","Data":"84dc00d70d496140434629e4e2348a5cd2d0c3bea0e7ef35707ba3062d351a62"} Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.515302 4644 scope.go:117] "RemoveContainer" containerID="edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.533519 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxhgx\" (UniqueName: \"kubernetes.io/projected/b6b69c38-8b1b-4419-9864-cd4b311d09c5-kube-api-access-mxhgx\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.533547 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6b69c38-8b1b-4419-9864-cd4b311d09c5-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.533558 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6b69c38-8b1b-4419-9864-cd4b311d09c5-logs\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.533569 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b69c38-8b1b-4419-9864-cd4b311d09c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.568507 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.570315 4644 scope.go:117] 
"RemoveContainer" containerID="b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.585377 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.599739 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:02:47 crc kubenswrapper[4644]: E0204 09:02:47.600172 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b69c38-8b1b-4419-9864-cd4b311d09c5" containerName="nova-metadata-metadata" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.600187 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b69c38-8b1b-4419-9864-cd4b311d09c5" containerName="nova-metadata-metadata" Feb 04 09:02:47 crc kubenswrapper[4644]: E0204 09:02:47.600203 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b69c38-8b1b-4419-9864-cd4b311d09c5" containerName="nova-metadata-log" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.600210 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b69c38-8b1b-4419-9864-cd4b311d09c5" containerName="nova-metadata-log" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.600391 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b69c38-8b1b-4419-9864-cd4b311d09c5" containerName="nova-metadata-log" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.600416 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b69c38-8b1b-4419-9864-cd4b311d09c5" containerName="nova-metadata-metadata" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.601358 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.606992 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.607380 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.613482 4644 scope.go:117] "RemoveContainer" containerID="edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c" Feb 04 09:02:47 crc kubenswrapper[4644]: E0204 09:02:47.615800 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c\": container with ID starting with edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c not found: ID does not exist" containerID="edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.615927 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c"} err="failed to get container status \"edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c\": rpc error: code = NotFound desc = could not find container \"edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c\": container with ID starting with edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c not found: ID does not exist" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.616063 4644 scope.go:117] "RemoveContainer" 
containerID="b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f" Feb 04 09:02:47 crc kubenswrapper[4644]: E0204 09:02:47.616676 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f\": container with ID starting with b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f not found: ID does not exist" containerID="b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.616760 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f"} err="failed to get container status \"b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f\": rpc error: code = NotFound desc = could not find container \"b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f\": container with ID starting with b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f not found: ID does not exist" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.616823 4644 scope.go:117] "RemoveContainer" containerID="edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.617202 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c"} err="failed to get container status \"edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c\": rpc error: code = NotFound desc = could not find container \"edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c\": container with ID starting with edab7f4320a1ac8026540bb649b75f33973306059ad9b22c30c0df46e7ea1f9c not found: ID does not exist" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.617276 4644 scope.go:117] "RemoveContainer" containerID="b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.617716 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f"} err="failed to get container status \"b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f\": rpc error: code = NotFound desc = could not find container \"b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f\": container with ID starting with b63141e06348036e166b786815a9a8ebe7db185f50f4544336295e11dc30df0f not found: ID does not exist" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.622416 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.714684 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.736292 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1aa88f0-6c9e-4704-9b28-3749b70007f5-logs\") pod \"nova-metadata-0\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.736465 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.736530 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsdc7\" (UniqueName: \"kubernetes.io/projected/b1aa88f0-6c9e-4704-9b28-3749b70007f5-kube-api-access-fsdc7\") pod \"nova-metadata-0\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.736581 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-config-data\") pod \"nova-metadata-0\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.736897 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.832201 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.832282 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.838295 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.838368 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1aa88f0-6c9e-4704-9b28-3749b70007f5-logs\") pod \"nova-metadata-0\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.838448 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.838493 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsdc7\" (UniqueName: \"kubernetes.io/projected/b1aa88f0-6c9e-4704-9b28-3749b70007f5-kube-api-access-fsdc7\") pod \"nova-metadata-0\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.838528 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-config-data\") pod \"nova-metadata-0\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " pod="openstack/nova-metadata-0" Feb 04 
09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.846371 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1aa88f0-6c9e-4704-9b28-3749b70007f5-logs\") pod \"nova-metadata-0\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.847118 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-config-data\") pod \"nova-metadata-0\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.851367 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.851941 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.884393 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsdc7\" (UniqueName: \"kubernetes.io/projected/b1aa88f0-6c9e-4704-9b28-3749b70007f5-kube-api-access-fsdc7\") pod \"nova-metadata-0\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " pod="openstack/nova-metadata-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.891986 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 04 09:02:47 crc kubenswrapper[4644]: I0204 09:02:47.934452 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.033956 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.034004 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.229532 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.325021 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mtv2p"] Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.325523 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" podUID="c84d00ec-439f-4e2a-8c88-290eb2a194ac" containerName="dnsmasq-dns" containerID="cri-o://b34535d9f9bd2e647c3dc0a580e87c2ecda51eadecc75c88bf1439c33ce300ff" gracePeriod=10 Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.340756 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" podUID="c84d00ec-439f-4e2a-8c88-290eb2a194ac" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: connect: connection refused" Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.544101 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bea8f471-d501-4dd4-b1dc-3fa921ba12a8","Type":"ContainerStarted","Data":"b2a623750dbea2376073a7afb76454a49afb1e2ad4e20c6126cda3f403e9c55f"} Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.544740 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.553178 4644 generic.go:334] "Generic (PLEG): container finished" podID="c84d00ec-439f-4e2a-8c88-290eb2a194ac" containerID="b34535d9f9bd2e647c3dc0a580e87c2ecda51eadecc75c88bf1439c33ce300ff" exitCode=0 Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.553539 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" event={"ID":"c84d00ec-439f-4e2a-8c88-290eb2a194ac","Type":"ContainerDied","Data":"b34535d9f9bd2e647c3dc0a580e87c2ecda51eadecc75c88bf1439c33ce300ff"} Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.595779 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.098928842 podStartE2EDuration="11.595752657s" podCreationTimestamp="2026-02-04 09:02:37 +0000 UTC" firstStartedPulling="2026-02-04 09:02:38.884086031 +0000 UTC m=+1268.924143786" lastFinishedPulling="2026-02-04 09:02:47.380909856 +0000 UTC m=+1277.420967601" observedRunningTime="2026-02-04 09:02:48.587623667 +0000 UTC m=+1278.627681422" watchObservedRunningTime="2026-02-04 09:02:48.595752657 +0000 UTC m=+1278.635810412" Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.737752 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b69c38-8b1b-4419-9864-cd4b311d09c5" path="/var/lib/kubelet/pods/b6b69c38-8b1b-4419-9864-cd4b311d09c5/volumes" Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.743842 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.743999 4644 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.868632 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.973923 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-dns-swift-storage-0\") pod \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.974051 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-config\") pod \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.974089 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvv4v\" (UniqueName: \"kubernetes.io/projected/c84d00ec-439f-4e2a-8c88-290eb2a194ac-kube-api-access-hvv4v\") pod \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.974174 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-dns-svc\") pod \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.974233 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-ovsdbserver-nb\") pod \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " Feb 04 09:02:48 crc kubenswrapper[4644]: I0204 09:02:48.974269 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-ovsdbserver-sb\") pod \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.053055 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c84d00ec-439f-4e2a-8c88-290eb2a194ac-kube-api-access-hvv4v" (OuterVolumeSpecName: "kube-api-access-hvv4v") pod "c84d00ec-439f-4e2a-8c88-290eb2a194ac" (UID: "c84d00ec-439f-4e2a-8c88-290eb2a194ac"). InnerVolumeSpecName "kube-api-access-hvv4v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.076672 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvv4v\" (UniqueName: \"kubernetes.io/projected/c84d00ec-439f-4e2a-8c88-290eb2a194ac-kube-api-access-hvv4v\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.132437 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.132437 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.169054 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-config" (OuterVolumeSpecName: "config") pod "c84d00ec-439f-4e2a-8c88-290eb2a194ac" (UID: "c84d00ec-439f-4e2a-8c88-290eb2a194ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.169094 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c84d00ec-439f-4e2a-8c88-290eb2a194ac" (UID: "c84d00ec-439f-4e2a-8c88-290eb2a194ac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.184803 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c84d00ec-439f-4e2a-8c88-290eb2a194ac" (UID: "c84d00ec-439f-4e2a-8c88-290eb2a194ac"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.185149 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-dns-swift-storage-0\") pod \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\" (UID: \"c84d00ec-439f-4e2a-8c88-290eb2a194ac\") " Feb 04 09:02:49 crc kubenswrapper[4644]: W0204 09:02:49.186348 4644 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c84d00ec-439f-4e2a-8c88-290eb2a194ac/volumes/kubernetes.io~configmap/dns-swift-storage-0 Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.186443 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c84d00ec-439f-4e2a-8c88-290eb2a194ac" (UID: "c84d00ec-439f-4e2a-8c88-290eb2a194ac"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.186661 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c84d00ec-439f-4e2a-8c88-290eb2a194ac" (UID: "c84d00ec-439f-4e2a-8c88-290eb2a194ac"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.188751 4644 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.188780 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.188789 4644 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.188798 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-config\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.200137 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c84d00ec-439f-4e2a-8c88-290eb2a194ac" (UID: "c84d00ec-439f-4e2a-8c88-290eb2a194ac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.290347 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c84d00ec-439f-4e2a-8c88-290eb2a194ac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.568832 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" event={"ID":"c84d00ec-439f-4e2a-8c88-290eb2a194ac","Type":"ContainerDied","Data":"f0d831ac5289d4637391c0f5dad009d686a2794ca2820c983f6b72772519f0d0"} Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.569088 4644 scope.go:117] "RemoveContainer" containerID="b34535d9f9bd2e647c3dc0a580e87c2ecda51eadecc75c88bf1439c33ce300ff" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.569202 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-mtv2p" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.592076 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1aa88f0-6c9e-4704-9b28-3749b70007f5","Type":"ContainerStarted","Data":"aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd"} Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.592122 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1aa88f0-6c9e-4704-9b28-3749b70007f5","Type":"ContainerStarted","Data":"1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154"} Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.592133 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1aa88f0-6c9e-4704-9b28-3749b70007f5","Type":"ContainerStarted","Data":"89535e287b62ecbee81a1423103a3ff2459a075b0be80016cd52f27f3246979c"} Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.611633 4644 scope.go:117] "RemoveContainer" containerID="2bcc80d100e35eb6434d8ba35d000a38a8e6c29dead3b1bac14c9778d67456a7" Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.616774 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mtv2p"] Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.640393 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mtv2p"] Feb 04 09:02:49 crc kubenswrapper[4644]: I0204 09:02:49.662981 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.662965094 podStartE2EDuration="2.662965094s" podCreationTimestamp="2026-02-04 09:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:02:49.628619043 +0000 UTC m=+1279.668676798" watchObservedRunningTime="2026-02-04 09:02:49.662965094 +0000 UTC m=+1279.703022849" Feb 04 09:02:50 crc kubenswrapper[4644]: I0204 09:02:50.669869 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c84d00ec-439f-4e2a-8c88-290eb2a194ac" path="/var/lib/kubelet/pods/c84d00ec-439f-4e2a-8c88-290eb2a194ac/volumes" Feb 04 09:02:51 crc kubenswrapper[4644]: I0204 09:02:51.619847 4644 generic.go:334] "Generic (PLEG): container finished" podID="13bbfe4b-94f7-4d67-9486-84fe0d0148a7" containerID="bbb89581a7f440c7bb50098d9840fadf2d2ab7492b02b8d833e0634207094cbd" exitCode=0 Feb 04 09:02:51 crc kubenswrapper[4644]: I0204 09:02:51.619920 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k2hj8" event={"ID":"13bbfe4b-94f7-4d67-9486-84fe0d0148a7","Type":"ContainerDied","Data":"bbb89581a7f440c7bb50098d9840fadf2d2ab7492b02b8d833e0634207094cbd"} Feb 04 09:02:52 crc kubenswrapper[4644]: I0204 09:02:52.934707 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 04 09:02:52 crc kubenswrapper[4644]: I0204 09:02:52.935676 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.056159 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k2hj8" Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.162387 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlbcl\" (UniqueName: \"kubernetes.io/projected/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-kube-api-access-xlbcl\") pod \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.162861 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-config-data\") pod \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.162925 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-combined-ca-bundle\") pod \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.163033 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-scripts\") pod \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\" (UID: \"13bbfe4b-94f7-4d67-9486-84fe0d0148a7\") " Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.169502 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-kube-api-access-xlbcl" (OuterVolumeSpecName: "kube-api-access-xlbcl") pod "13bbfe4b-94f7-4d67-9486-84fe0d0148a7" (UID: "13bbfe4b-94f7-4d67-9486-84fe0d0148a7"). InnerVolumeSpecName "kube-api-access-xlbcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.171157 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-scripts" (OuterVolumeSpecName: "scripts") pod "13bbfe4b-94f7-4d67-9486-84fe0d0148a7" (UID: "13bbfe4b-94f7-4d67-9486-84fe0d0148a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.192069 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-config-data" (OuterVolumeSpecName: "config-data") pod "13bbfe4b-94f7-4d67-9486-84fe0d0148a7" (UID: "13bbfe4b-94f7-4d67-9486-84fe0d0148a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.205710 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13bbfe4b-94f7-4d67-9486-84fe0d0148a7" (UID: "13bbfe4b-94f7-4d67-9486-84fe0d0148a7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.265853 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.265894 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.265910 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.265923 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlbcl\" (UniqueName: \"kubernetes.io/projected/13bbfe4b-94f7-4d67-9486-84fe0d0148a7-kube-api-access-xlbcl\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.640646 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k2hj8" event={"ID":"13bbfe4b-94f7-4d67-9486-84fe0d0148a7","Type":"ContainerDied","Data":"6ae25807eccc314cd2b340883dc48379540db53a6c7d6b317477c9109eff0fb8"} Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.640910 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ae25807eccc314cd2b340883dc48379540db53a6c7d6b317477c9109eff0fb8" Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.641092 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k2hj8" Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.643352 4644 generic.go:334] "Generic (PLEG): container finished" podID="e4a5663f-2ce9-417b-a359-5db9a580628b" containerID="75705b14068c1ae7b352ef3819097dadf96bb7380f42402734539b03018e6b4b" exitCode=0 Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.643437 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7bsw6" event={"ID":"e4a5663f-2ce9-417b-a359-5db9a580628b","Type":"ContainerDied","Data":"75705b14068c1ae7b352ef3819097dadf96bb7380f42402734539b03018e6b4b"} Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.855717 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.856300 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12" containerName="nova-scheduler-scheduler" containerID="cri-o://c1b0b95035b014da142748632628ad35524250dcea068d7091bb757864c2d8e3" gracePeriod=30 Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.887581 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.888208 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" containerName="nova-api-log" containerID="cri-o://2208cd0151ce80dd8ff53a1eb1f8d48edc7d28433cbc79f75165eca5cec596e0" gracePeriod=30 Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.888987 4644 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" containerName="nova-api-api" containerID="cri-o://0b8ae2243af93c2ae3b5b702c1e42380a0269254d284bbeb2b852eb7464b173c" gracePeriod=30 Feb 04 09:02:53 crc kubenswrapper[4644]: I0204 09:02:53.903138 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:02:54 crc kubenswrapper[4644]: I0204 09:02:54.655847 4644 generic.go:334] "Generic (PLEG): container finished" podID="ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" containerID="2208cd0151ce80dd8ff53a1eb1f8d48edc7d28433cbc79f75165eca5cec596e0" exitCode=143 Feb 04 09:02:54 crc kubenswrapper[4644]: I0204 09:02:54.655928 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb","Type":"ContainerDied","Data":"2208cd0151ce80dd8ff53a1eb1f8d48edc7d28433cbc79f75165eca5cec596e0"} Feb 04 09:02:54 crc kubenswrapper[4644]: I0204 09:02:54.656064 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b1aa88f0-6c9e-4704-9b28-3749b70007f5" containerName="nova-metadata-log" containerID="cri-o://1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154" gracePeriod=30 Feb 04 09:02:54 crc kubenswrapper[4644]: I0204 09:02:54.656166 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b1aa88f0-6c9e-4704-9b28-3749b70007f5" containerName="nova-metadata-metadata" containerID="cri-o://aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd" gracePeriod=30 Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.113498 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7bsw6" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.206349 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-scripts\") pod \"e4a5663f-2ce9-417b-a359-5db9a580628b\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.206468 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k26g9\" (UniqueName: \"kubernetes.io/projected/e4a5663f-2ce9-417b-a359-5db9a580628b-kube-api-access-k26g9\") pod \"e4a5663f-2ce9-417b-a359-5db9a580628b\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.206533 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-config-data\") pod \"e4a5663f-2ce9-417b-a359-5db9a580628b\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.206703 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-combined-ca-bundle\") pod \"e4a5663f-2ce9-417b-a359-5db9a580628b\" (UID: \"e4a5663f-2ce9-417b-a359-5db9a580628b\") " Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.212431 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a5663f-2ce9-417b-a359-5db9a580628b-kube-api-access-k26g9" (OuterVolumeSpecName: "kube-api-access-k26g9") pod "e4a5663f-2ce9-417b-a359-5db9a580628b" (UID: 
"e4a5663f-2ce9-417b-a359-5db9a580628b"). InnerVolumeSpecName "kube-api-access-k26g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.212443 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-scripts" (OuterVolumeSpecName: "scripts") pod "e4a5663f-2ce9-417b-a359-5db9a580628b" (UID: "e4a5663f-2ce9-417b-a359-5db9a580628b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.242596 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4a5663f-2ce9-417b-a359-5db9a580628b" (UID: "e4a5663f-2ce9-417b-a359-5db9a580628b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.272418 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-config-data" (OuterVolumeSpecName: "config-data") pod "e4a5663f-2ce9-417b-a359-5db9a580628b" (UID: "e4a5663f-2ce9-417b-a359-5db9a580628b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.309415 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k26g9\" (UniqueName: \"kubernetes.io/projected/e4a5663f-2ce9-417b-a359-5db9a580628b-kube-api-access-k26g9\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.309451 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.309463 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.309471 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4a5663f-2ce9-417b-a359-5db9a580628b-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.328731 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.410735 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-combined-ca-bundle\") pod \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.410840 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsdc7\" (UniqueName: \"kubernetes.io/projected/b1aa88f0-6c9e-4704-9b28-3749b70007f5-kube-api-access-fsdc7\") pod \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.410888 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1aa88f0-6c9e-4704-9b28-3749b70007f5-logs\") pod \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.411020 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-nova-metadata-tls-certs\") pod \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.411050 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-config-data\") pod \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\" (UID: \"b1aa88f0-6c9e-4704-9b28-3749b70007f5\") " Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.411654 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1aa88f0-6c9e-4704-9b28-3749b70007f5-logs" (OuterVolumeSpecName: "logs") pod "b1aa88f0-6c9e-4704-9b28-3749b70007f5" (UID: "b1aa88f0-6c9e-4704-9b28-3749b70007f5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.429238 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1aa88f0-6c9e-4704-9b28-3749b70007f5-kube-api-access-fsdc7" (OuterVolumeSpecName: "kube-api-access-fsdc7") pod "b1aa88f0-6c9e-4704-9b28-3749b70007f5" (UID: "b1aa88f0-6c9e-4704-9b28-3749b70007f5"). InnerVolumeSpecName "kube-api-access-fsdc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.495705 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1aa88f0-6c9e-4704-9b28-3749b70007f5" (UID: "b1aa88f0-6c9e-4704-9b28-3749b70007f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.511466 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-config-data" (OuterVolumeSpecName: "config-data") pod "b1aa88f0-6c9e-4704-9b28-3749b70007f5" (UID: "b1aa88f0-6c9e-4704-9b28-3749b70007f5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.512673 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.512687 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsdc7\" (UniqueName: \"kubernetes.io/projected/b1aa88f0-6c9e-4704-9b28-3749b70007f5-kube-api-access-fsdc7\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.512700 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1aa88f0-6c9e-4704-9b28-3749b70007f5-logs\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.512709 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.534532 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b1aa88f0-6c9e-4704-9b28-3749b70007f5" (UID: "b1aa88f0-6c9e-4704-9b28-3749b70007f5"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.614930 4644 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1aa88f0-6c9e-4704-9b28-3749b70007f5-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.664959 4644 generic.go:334] "Generic (PLEG): container finished" podID="b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12" containerID="c1b0b95035b014da142748632628ad35524250dcea068d7091bb757864c2d8e3" exitCode=0 Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.665023 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12","Type":"ContainerDied","Data":"c1b0b95035b014da142748632628ad35524250dcea068d7091bb757864c2d8e3"} Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.667528 4644 generic.go:334] "Generic (PLEG): container finished" podID="b1aa88f0-6c9e-4704-9b28-3749b70007f5" containerID="aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd" exitCode=0 Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.667656 4644 generic.go:334] "Generic (PLEG): container finished" podID="b1aa88f0-6c9e-4704-9b28-3749b70007f5" containerID="1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154" exitCode=143 Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.667787 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1aa88f0-6c9e-4704-9b28-3749b70007f5","Type":"ContainerDied","Data":"aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd"} Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.667908 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1aa88f0-6c9e-4704-9b28-3749b70007f5","Type":"ContainerDied","Data":"1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154"} Feb 04 09:02:55 crc 
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.667997 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1aa88f0-6c9e-4704-9b28-3749b70007f5","Type":"ContainerDied","Data":"89535e287b62ecbee81a1423103a3ff2459a075b0be80016cd52f27f3246979c"}
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.667924 4644 scope.go:117] "RemoveContainer" containerID="aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.667913 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.670246 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7bsw6" event={"ID":"e4a5663f-2ce9-417b-a359-5db9a580628b","Type":"ContainerDied","Data":"78573e42e25d026c68c2e9b4273909c460d283a300943a825577705ddc43e918"}
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.670296 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78573e42e25d026c68c2e9b4273909c460d283a300943a825577705ddc43e918"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.670299 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7bsw6"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.691427 4644 scope.go:117] "RemoveContainer" containerID="1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.723550 4644 scope.go:117] "RemoveContainer" containerID="aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd"
Feb 04 09:02:55 crc kubenswrapper[4644]: E0204 09:02:55.723986 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd\": container with ID starting with aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd not found: ID does not exist" containerID="aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.724023 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd"} err="failed to get container status \"aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd\": rpc error: code = NotFound desc = could not find container \"aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd\": container with ID starting with aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd not found: ID does not exist"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.724051 4644 scope.go:117] "RemoveContainer" containerID="1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154"
Feb 04 09:02:55 crc kubenswrapper[4644]: E0204 09:02:55.724289 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154\": container with ID starting with 1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154 not found: ID does not exist" containerID="1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.724317 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154"} err="failed to get container status \"1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154\": rpc error: code = NotFound desc = could not find container \"1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154\": container with ID starting with 1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154 not found: ID does not exist"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.724351 4644 scope.go:117] "RemoveContainer" containerID="aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.724568 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd"} err="failed to get container status \"aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd\": rpc error: code = NotFound desc = could not find container \"aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd\": container with ID starting with aa57a500bd04e22f65bce1ac0d137ff8e4e117c5af4da4d9722d1783f3b8f0dd not found: ID does not exist"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.724592 4644 scope.go:117] "RemoveContainer" containerID="1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.724789 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154"} err="failed to get container status \"1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154\": rpc error: code = NotFound desc = could not find container \"1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154\": container with ID starting with 1be406bef9e10b758ed64998c8f0d1dfaed6f62de7a2d6998c1bfd618aa81154 not found: ID does not exist"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.737439 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.752580 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.759598 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 04 09:02:55 crc kubenswrapper[4644]: E0204 09:02:55.759927 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13bbfe4b-94f7-4d67-9486-84fe0d0148a7" containerName="nova-manage"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.759942 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="13bbfe4b-94f7-4d67-9486-84fe0d0148a7" containerName="nova-manage"
Feb 04 09:02:55 crc kubenswrapper[4644]: E0204 09:02:55.759960 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84d00ec-439f-4e2a-8c88-290eb2a194ac" containerName="dnsmasq-dns"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.759967 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84d00ec-439f-4e2a-8c88-290eb2a194ac" containerName="dnsmasq-dns"
Feb 04 09:02:55 crc kubenswrapper[4644]: E0204 09:02:55.759976 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1aa88f0-6c9e-4704-9b28-3749b70007f5" containerName="nova-metadata-metadata"
Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.759982 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1aa88f0-6c9e-4704-9b28-3749b70007f5" containerName="nova-metadata-metadata"
CPUSet assignment" podUID="b1aa88f0-6c9e-4704-9b28-3749b70007f5" containerName="nova-metadata-metadata" Feb 04 09:02:55 crc kubenswrapper[4644]: E0204 09:02:55.759998 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1aa88f0-6c9e-4704-9b28-3749b70007f5" containerName="nova-metadata-log" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.760003 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1aa88f0-6c9e-4704-9b28-3749b70007f5" containerName="nova-metadata-log" Feb 04 09:02:55 crc kubenswrapper[4644]: E0204 09:02:55.760013 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a5663f-2ce9-417b-a359-5db9a580628b" containerName="nova-cell1-conductor-db-sync" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.760018 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a5663f-2ce9-417b-a359-5db9a580628b" containerName="nova-cell1-conductor-db-sync" Feb 04 09:02:55 crc kubenswrapper[4644]: E0204 09:02:55.760036 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84d00ec-439f-4e2a-8c88-290eb2a194ac" containerName="init" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.760042 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84d00ec-439f-4e2a-8c88-290eb2a194ac" containerName="init" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.760195 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1aa88f0-6c9e-4704-9b28-3749b70007f5" containerName="nova-metadata-log" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.760207 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="c84d00ec-439f-4e2a-8c88-290eb2a194ac" containerName="dnsmasq-dns" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.760216 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="13bbfe4b-94f7-4d67-9486-84fe0d0148a7" containerName="nova-manage" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.760227 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1aa88f0-6c9e-4704-9b28-3749b70007f5" containerName="nova-metadata-metadata" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.760240 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a5663f-2ce9-417b-a359-5db9a580628b" containerName="nova-cell1-conductor-db-sync" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.761112 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.765722 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.765910 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.786526 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.799042 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.800532 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.802090 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.822797 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.823870 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q22kj\" (UniqueName: \"kubernetes.io/projected/0b96a776-d2b3-470b-aff3-559fc8afc17f-kube-api-access-q22kj\") pod \"nova-metadata-0\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.823912 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b96a776-d2b3-470b-aff3-559fc8afc17f-logs\") pod \"nova-metadata-0\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.823939 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-config-data\") pod \"nova-metadata-0\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.824003 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.824032 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.925815 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.926119 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.926221 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36639dbd-0602-44cf-a535-51d69170e6c5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"36639dbd-0602-44cf-a535-51d69170e6c5\") " pod="openstack/nova-cell1-conductor-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.926426 4644 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36639dbd-0602-44cf-a535-51d69170e6c5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"36639dbd-0602-44cf-a535-51d69170e6c5\") " pod="openstack/nova-cell1-conductor-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.926459 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsg7z\" (UniqueName: \"kubernetes.io/projected/36639dbd-0602-44cf-a535-51d69170e6c5-kube-api-access-bsg7z\") pod \"nova-cell1-conductor-0\" (UID: \"36639dbd-0602-44cf-a535-51d69170e6c5\") " pod="openstack/nova-cell1-conductor-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.926491 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q22kj\" (UniqueName: \"kubernetes.io/projected/0b96a776-d2b3-470b-aff3-559fc8afc17f-kube-api-access-q22kj\") pod \"nova-metadata-0\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.926599 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b96a776-d2b3-470b-aff3-559fc8afc17f-logs\") pod \"nova-metadata-0\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.926650 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-config-data\") pod \"nova-metadata-0\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.926966 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b96a776-d2b3-470b-aff3-559fc8afc17f-logs\") pod \"nova-metadata-0\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.929631 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.929851 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-config-data\") pod \"nova-metadata-0\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.930924 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " pod="openstack/nova-metadata-0" Feb 04 09:02:55 crc kubenswrapper[4644]: I0204 09:02:55.942794 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q22kj\" (UniqueName: \"kubernetes.io/projected/0b96a776-d2b3-470b-aff3-559fc8afc17f-kube-api-access-q22kj\") pod \"nova-metadata-0\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " 
pod="openstack/nova-metadata-0" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.028503 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36639dbd-0602-44cf-a535-51d69170e6c5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"36639dbd-0602-44cf-a535-51d69170e6c5\") " pod="openstack/nova-cell1-conductor-0" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.028543 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsg7z\" (UniqueName: \"kubernetes.io/projected/36639dbd-0602-44cf-a535-51d69170e6c5-kube-api-access-bsg7z\") pod \"nova-cell1-conductor-0\" (UID: \"36639dbd-0602-44cf-a535-51d69170e6c5\") " pod="openstack/nova-cell1-conductor-0" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.028652 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36639dbd-0602-44cf-a535-51d69170e6c5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"36639dbd-0602-44cf-a535-51d69170e6c5\") " pod="openstack/nova-cell1-conductor-0" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.033631 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36639dbd-0602-44cf-a535-51d69170e6c5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"36639dbd-0602-44cf-a535-51d69170e6c5\") " pod="openstack/nova-cell1-conductor-0" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.033660 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36639dbd-0602-44cf-a535-51d69170e6c5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"36639dbd-0602-44cf-a535-51d69170e6c5\") " pod="openstack/nova-cell1-conductor-0" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.051880 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsg7z\" (UniqueName: \"kubernetes.io/projected/36639dbd-0602-44cf-a535-51d69170e6c5-kube-api-access-bsg7z\") pod \"nova-cell1-conductor-0\" (UID: \"36639dbd-0602-44cf-a535-51d69170e6c5\") " pod="openstack/nova-cell1-conductor-0" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.084411 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.132541 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.372723 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.435144 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-combined-ca-bundle\") pod \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\" (UID: \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\") " Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.435275 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-config-data\") pod \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\" (UID: \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\") " Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.435530 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc2g7\" (UniqueName: \"kubernetes.io/projected/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-kube-api-access-kc2g7\") pod \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\" (UID: \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\") " Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.443826 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-kube-api-access-kc2g7" (OuterVolumeSpecName: "kube-api-access-kc2g7") pod "b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12" (UID: "b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12"). InnerVolumeSpecName "kube-api-access-kc2g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:02:56 crc kubenswrapper[4644]: E0204 09:02:56.462864 4644 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-combined-ca-bundle podName:b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12 nodeName:}" failed. No retries permitted until 2026-02-04 09:02:56.962832741 +0000 UTC m=+1287.002890496 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-combined-ca-bundle") pod "b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12" (UID: "b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12") : error deleting /var/lib/kubelet/pods/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12/volume-subpaths: remove /var/lib/kubelet/pods/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12/volume-subpaths: no such file or directory Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.465627 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-config-data" (OuterVolumeSpecName: "config-data") pod "b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12" (UID: "b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.537496 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc2g7\" (UniqueName: \"kubernetes.io/projected/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-kube-api-access-kc2g7\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.537537 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.670447 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1aa88f0-6c9e-4704-9b28-3749b70007f5" path="/var/lib/kubelet/pods/b1aa88f0-6c9e-4704-9b28-3749b70007f5/volumes" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.680613 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12","Type":"ContainerDied","Data":"d4748b16aaad17f67060a2cc304eb1e2bc289a5595ab70109abe7e6a316d762f"} Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.680679 4644 scope.go:117] "RemoveContainer" containerID="c1b0b95035b014da142748632628ad35524250dcea068d7091bb757864c2d8e3" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.680808 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.740670 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 04 09:02:56 crc kubenswrapper[4644]: I0204 09:02:56.759519 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.047422 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-combined-ca-bundle\") pod \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\" (UID: \"b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12\") " Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.050993 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12" (UID: "b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.149537 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.368999 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.376783 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.410086 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 09:02:57 crc kubenswrapper[4644]: E0204 09:02:57.410663 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12" containerName="nova-scheduler-scheduler" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.410689 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12" containerName="nova-scheduler-scheduler" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.410912 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12" containerName="nova-scheduler-scheduler" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.411817 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.415079 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.424289 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.455090 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda9f246-8001-4ff0-bfd4-660adb11240c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eda9f246-8001-4ff0-bfd4-660adb11240c\") " pod="openstack/nova-scheduler-0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.455206 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eda9f246-8001-4ff0-bfd4-660adb11240c-config-data\") pod \"nova-scheduler-0\" (UID: \"eda9f246-8001-4ff0-bfd4-660adb11240c\") " pod="openstack/nova-scheduler-0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.455314 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hckc4\" (UniqueName: \"kubernetes.io/projected/eda9f246-8001-4ff0-bfd4-660adb11240c-kube-api-access-hckc4\") pod \"nova-scheduler-0\" (UID: \"eda9f246-8001-4ff0-bfd4-660adb11240c\") " pod="openstack/nova-scheduler-0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.558495 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eda9f246-8001-4ff0-bfd4-660adb11240c-config-data\") pod \"nova-scheduler-0\" (UID: \"eda9f246-8001-4ff0-bfd4-660adb11240c\") " pod="openstack/nova-scheduler-0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.558913 4644 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-hckc4\" (UniqueName: \"kubernetes.io/projected/eda9f246-8001-4ff0-bfd4-660adb11240c-kube-api-access-hckc4\") pod \"nova-scheduler-0\" (UID: \"eda9f246-8001-4ff0-bfd4-660adb11240c\") " pod="openstack/nova-scheduler-0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.558972 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda9f246-8001-4ff0-bfd4-660adb11240c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eda9f246-8001-4ff0-bfd4-660adb11240c\") " pod="openstack/nova-scheduler-0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.563533 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda9f246-8001-4ff0-bfd4-660adb11240c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eda9f246-8001-4ff0-bfd4-660adb11240c\") " pod="openstack/nova-scheduler-0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.563788 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eda9f246-8001-4ff0-bfd4-660adb11240c-config-data\") pod \"nova-scheduler-0\" (UID: \"eda9f246-8001-4ff0-bfd4-660adb11240c\") " pod="openstack/nova-scheduler-0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.610136 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hckc4\" (UniqueName: \"kubernetes.io/projected/eda9f246-8001-4ff0-bfd4-660adb11240c-kube-api-access-hckc4\") pod \"nova-scheduler-0\" (UID: \"eda9f246-8001-4ff0-bfd4-660adb11240c\") " pod="openstack/nova-scheduler-0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.677526 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.713259 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b96a776-d2b3-470b-aff3-559fc8afc17f","Type":"ContainerStarted","Data":"ab67f5096f47ebb2c1469268bcff8ff04d272b2064f447f0ce0a792283decd8a"} Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.713299 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b96a776-d2b3-470b-aff3-559fc8afc17f","Type":"ContainerStarted","Data":"5b56ad3ca9d1830f449dc90ef39f6a93297dcf6c472ae95797be269f72741752"} Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.713308 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b96a776-d2b3-470b-aff3-559fc8afc17f","Type":"ContainerStarted","Data":"288ba4f4f4aed280458fbd7f1575e6491634807c6b9345461b1bfec2bbcfa51a"} Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.717313 4644 generic.go:334] "Generic (PLEG): container finished" podID="ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" containerID="0b8ae2243af93c2ae3b5b702c1e42380a0269254d284bbeb2b852eb7464b173c" exitCode=0 Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.717375 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb","Type":"ContainerDied","Data":"0b8ae2243af93c2ae3b5b702c1e42380a0269254d284bbeb2b852eb7464b173c"} Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.717392 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb","Type":"ContainerDied","Data":"5e8847ba5eed4d6cbee509047c05637388b9dc8203e8c00b603db04de4a1e5c6"} Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.717407 4644 scope.go:117] "RemoveContainer" containerID="0b8ae2243af93c2ae3b5b702c1e42380a0269254d284bbeb2b852eb7464b173c" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.717480 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.729998 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"36639dbd-0602-44cf-a535-51d69170e6c5","Type":"ContainerStarted","Data":"d29783cbd2722de5e5eabfb57d839ac72765a6ec98c7d36fc80ab63f8bf49788"} Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.730044 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"36639dbd-0602-44cf-a535-51d69170e6c5","Type":"ContainerStarted","Data":"9dc0eead3c8f8cd91a15fce75520e4181d07c045d10736d3b8ea9f9555eb0a0c"} Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.730811 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.730945 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.754108 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.754090135 podStartE2EDuration="2.754090135s" podCreationTimestamp="2026-02-04 09:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:02:57.739277334 +0000 UTC m=+1287.779335089" watchObservedRunningTime="2026-02-04 09:02:57.754090135 +0000 UTC m=+1287.794147890" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.762848 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-logs\") pod \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.763123 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-combined-ca-bundle\") pod \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.763169 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-config-data\") pod \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.763200 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g9lq\" (UniqueName: \"kubernetes.io/projected/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-kube-api-access-9g9lq\") pod \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\" (UID: \"ec6eae1c-78bb-4962-a005-7fc63b1eaeeb\") " Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.765705 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-logs" (OuterVolumeSpecName: "logs") pod "ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" (UID: "ec6eae1c-78bb-4962-a005-7fc63b1eaeeb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.772496 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-kube-api-access-9g9lq" (OuterVolumeSpecName: "kube-api-access-9g9lq") pod "ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" (UID: "ec6eae1c-78bb-4962-a005-7fc63b1eaeeb"). InnerVolumeSpecName "kube-api-access-9g9lq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.775603 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.775585938 podStartE2EDuration="2.775585938s" podCreationTimestamp="2026-02-04 09:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:02:57.768742642 +0000 UTC m=+1287.808800397" watchObservedRunningTime="2026-02-04 09:02:57.775585938 +0000 UTC m=+1287.815643693" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.784175 4644 scope.go:117] "RemoveContainer" containerID="2208cd0151ce80dd8ff53a1eb1f8d48edc7d28433cbc79f75165eca5cec596e0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.807012 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" (UID: "ec6eae1c-78bb-4962-a005-7fc63b1eaeeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.823177 4644 scope.go:117] "RemoveContainer" containerID="0b8ae2243af93c2ae3b5b702c1e42380a0269254d284bbeb2b852eb7464b173c" Feb 04 09:02:57 crc kubenswrapper[4644]: E0204 09:02:57.823806 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b8ae2243af93c2ae3b5b702c1e42380a0269254d284bbeb2b852eb7464b173c\": container with ID starting with 0b8ae2243af93c2ae3b5b702c1e42380a0269254d284bbeb2b852eb7464b173c not found: ID does not exist" containerID="0b8ae2243af93c2ae3b5b702c1e42380a0269254d284bbeb2b852eb7464b173c" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.823851 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8ae2243af93c2ae3b5b702c1e42380a0269254d284bbeb2b852eb7464b173c"} err="failed to get container status \"0b8ae2243af93c2ae3b5b702c1e42380a0269254d284bbeb2b852eb7464b173c\": rpc error: code = NotFound desc = could not find container \"0b8ae2243af93c2ae3b5b702c1e42380a0269254d284bbeb2b852eb7464b173c\": container with ID starting with 0b8ae2243af93c2ae3b5b702c1e42380a0269254d284bbeb2b852eb7464b173c not found: ID does not exist" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.823864 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-config-data" (OuterVolumeSpecName: "config-data") pod "ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" (UID: "ec6eae1c-78bb-4962-a005-7fc63b1eaeeb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.823877 4644 scope.go:117] "RemoveContainer" containerID="2208cd0151ce80dd8ff53a1eb1f8d48edc7d28433cbc79f75165eca5cec596e0" Feb 04 09:02:57 crc kubenswrapper[4644]: E0204 09:02:57.824212 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2208cd0151ce80dd8ff53a1eb1f8d48edc7d28433cbc79f75165eca5cec596e0\": container with ID starting with 2208cd0151ce80dd8ff53a1eb1f8d48edc7d28433cbc79f75165eca5cec596e0 not found: ID does not exist" containerID="2208cd0151ce80dd8ff53a1eb1f8d48edc7d28433cbc79f75165eca5cec596e0" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.824242 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2208cd0151ce80dd8ff53a1eb1f8d48edc7d28433cbc79f75165eca5cec596e0"} err="failed to get container status \"2208cd0151ce80dd8ff53a1eb1f8d48edc7d28433cbc79f75165eca5cec596e0\": rpc error: code = NotFound desc = could not find container \"2208cd0151ce80dd8ff53a1eb1f8d48edc7d28433cbc79f75165eca5cec596e0\": container with ID starting with 2208cd0151ce80dd8ff53a1eb1f8d48edc7d28433cbc79f75165eca5cec596e0 not found: ID does not exist" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.866667 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-logs\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.866699 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.866712 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:57 crc kubenswrapper[4644]: I0204 09:02:57.866723 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g9lq\" (UniqueName: \"kubernetes.io/projected/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb-kube-api-access-9g9lq\") on node \"crc\" DevicePath \"\"" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.103194 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.120833 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.131370 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 04 09:02:58 crc kubenswrapper[4644]: E0204 09:02:58.131801 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" containerName="nova-api-api" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.131819 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" containerName="nova-api-api" Feb 04 09:02:58 crc kubenswrapper[4644]: E0204 09:02:58.131853 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" containerName="nova-api-log" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.131862 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" 
containerName="nova-api-log" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.132046 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" containerName="nova-api-api" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.132068 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" containerName="nova-api-log" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.133024 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.136747 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.139594 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.171888 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64625614-25ec-4c79-9250-b80273cb0b44-logs\") pod \"nova-api-0\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " pod="openstack/nova-api-0" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.171937 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64625614-25ec-4c79-9250-b80273cb0b44-config-data\") pod \"nova-api-0\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " pod="openstack/nova-api-0" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.172092 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64625614-25ec-4c79-9250-b80273cb0b44-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " pod="openstack/nova-api-0" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.172299 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5tk9\" (UniqueName: \"kubernetes.io/projected/64625614-25ec-4c79-9250-b80273cb0b44-kube-api-access-p5tk9\") pod \"nova-api-0\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " pod="openstack/nova-api-0" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.244936 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.273576 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5tk9\" (UniqueName: \"kubernetes.io/projected/64625614-25ec-4c79-9250-b80273cb0b44-kube-api-access-p5tk9\") pod \"nova-api-0\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " pod="openstack/nova-api-0" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.273893 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64625614-25ec-4c79-9250-b80273cb0b44-logs\") pod \"nova-api-0\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " pod="openstack/nova-api-0" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.273926 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64625614-25ec-4c79-9250-b80273cb0b44-config-data\") pod \"nova-api-0\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " 
pod="openstack/nova-api-0" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.273993 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64625614-25ec-4c79-9250-b80273cb0b44-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " pod="openstack/nova-api-0" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.274810 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64625614-25ec-4c79-9250-b80273cb0b44-logs\") pod \"nova-api-0\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " pod="openstack/nova-api-0" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.280310 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64625614-25ec-4c79-9250-b80273cb0b44-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " pod="openstack/nova-api-0" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.284614 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64625614-25ec-4c79-9250-b80273cb0b44-config-data\") pod \"nova-api-0\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " pod="openstack/nova-api-0" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.291733 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5tk9\" (UniqueName: \"kubernetes.io/projected/64625614-25ec-4c79-9250-b80273cb0b44-kube-api-access-p5tk9\") pod \"nova-api-0\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " pod="openstack/nova-api-0" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.446583 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.752182 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12" path="/var/lib/kubelet/pods/b4b2fe49-5b7c-48ec-bcb7-5af9cef1eb12/volumes" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.752963 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec6eae1c-78bb-4962-a005-7fc63b1eaeeb" path="/var/lib/kubelet/pods/ec6eae1c-78bb-4962-a005-7fc63b1eaeeb/volumes" Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.793721 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eda9f246-8001-4ff0-bfd4-660adb11240c","Type":"ContainerStarted","Data":"da07052ca438407d43e684433826b46ca40a0b0e3950d79c7d35ce8448e643ef"} Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.793752 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eda9f246-8001-4ff0-bfd4-660adb11240c","Type":"ContainerStarted","Data":"431c910d145cfced994c415c09a23d4faac0ff551552f58e1e97072e32274801"} Feb 04 09:02:58 crc kubenswrapper[4644]: I0204 09:02:58.826038 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.82600407 podStartE2EDuration="1.82600407s" podCreationTimestamp="2026-02-04 09:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:02:58.82455664 +0000 UTC m=+1288.864614395" watchObservedRunningTime="2026-02-04 09:02:58.82600407 +0000 UTC m=+1288.866061825" Feb 04 09:02:59 crc kubenswrapper[4644]: I0204 09:02:59.108940 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:02:59 crc kubenswrapper[4644]: I0204 09:02:59.803213 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64625614-25ec-4c79-9250-b80273cb0b44","Type":"ContainerStarted","Data":"9bef8085adab55e83ced86ed2ce826e4b289095a376875b14bb45cd14f95983b"} Feb 04 09:02:59 crc kubenswrapper[4644]: I0204 09:02:59.803620 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64625614-25ec-4c79-9250-b80273cb0b44","Type":"ContainerStarted","Data":"3018a86bc97112d1fcdf9f8bca94e3c21055c93cc660fd0fc559439070e272b5"} Feb 04 09:02:59 crc kubenswrapper[4644]: I0204 09:02:59.803641 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64625614-25ec-4c79-9250-b80273cb0b44","Type":"ContainerStarted","Data":"03ca5615f5b8b7e0fcdcfecc0aee75014ffc23dbe7422921d294cf65fc25398e"} Feb 04 09:02:59 crc kubenswrapper[4644]: I0204 09:02:59.829024 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.828997905 podStartE2EDuration="1.828997905s" podCreationTimestamp="2026-02-04 09:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:02:59.821994354 +0000 UTC m=+1289.862052109" watchObservedRunningTime="2026-02-04 09:02:59.828997905 +0000 UTC m=+1289.869055680" Feb 04 09:03:01 crc kubenswrapper[4644]: I0204 09:03:01.085782 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 04 09:03:01 crc kubenswrapper[4644]: I0204 09:03:01.086920 4644 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 04 09:03:01 crc kubenswrapper[4644]: I0204 09:03:01.163596 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 04 09:03:02 crc kubenswrapper[4644]: I0204 09:03:02.731629 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 04 09:03:06 crc kubenswrapper[4644]: I0204 09:03:06.085917 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 04 09:03:06 crc kubenswrapper[4644]: I0204 09:03:06.086379 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 04 09:03:07 crc kubenswrapper[4644]: I0204 09:03:07.101609 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b96a776-d2b3-470b-aff3-559fc8afc17f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 04 09:03:07 crc kubenswrapper[4644]: I0204 09:03:07.101630 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b96a776-d2b3-470b-aff3-559fc8afc17f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 04 09:03:07 crc kubenswrapper[4644]: I0204 09:03:07.731646 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 04 09:03:07 crc kubenswrapper[4644]: I0204 09:03:07.774473 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 04 09:03:07 crc kubenswrapper[4644]: I0204 09:03:07.873652 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 04 09:03:07 crc kubenswrapper[4644]: I0204 09:03:07.975930 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 04 09:03:08 crc kubenswrapper[4644]: I0204 09:03:08.446967 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 04 09:03:08 crc kubenswrapper[4644]: I0204 09:03:08.447110 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 04 09:03:09 crc kubenswrapper[4644]: I0204 09:03:09.529557 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="64625614-25ec-4c79-9250-b80273cb0b44" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 04 09:03:09 crc kubenswrapper[4644]: I0204 09:03:09.529609 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="64625614-25ec-4c79-9250-b80273cb0b44" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 04 09:03:16 crc kubenswrapper[4644]: I0204 09:03:16.094016 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 04 09:03:16 crc kubenswrapper[4644]: I0204 09:03:16.096790 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Feb 04 09:03:16 crc kubenswrapper[4644]: I0204 09:03:16.103263 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 04 09:03:16 crc kubenswrapper[4644]: I0204 09:03:16.887117 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:16 crc kubenswrapper[4644]: I0204 09:03:16.979239 4644 generic.go:334] "Generic (PLEG): container finished" podID="287914be-51e2-4982-a7e0-9e3cc4bfc1aa" containerID="e8908909ab1a8004c9f4ea62f192e868522bd2f4014e09ab7762b6513e7d16fb" exitCode=137 Feb 04 09:03:16 crc kubenswrapper[4644]: I0204 09:03:16.979293 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:16 crc kubenswrapper[4644]: I0204 09:03:16.979304 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"287914be-51e2-4982-a7e0-9e3cc4bfc1aa","Type":"ContainerDied","Data":"e8908909ab1a8004c9f4ea62f192e868522bd2f4014e09ab7762b6513e7d16fb"} Feb 04 09:03:16 crc kubenswrapper[4644]: I0204 09:03:16.979376 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"287914be-51e2-4982-a7e0-9e3cc4bfc1aa","Type":"ContainerDied","Data":"7374a9cf51e287a17e0bbf29de6a33d61aa8f6c0d24e353eed0f45e7eff603d0"} Feb 04 09:03:16 crc kubenswrapper[4644]: I0204 09:03:16.979399 4644 scope.go:117] "RemoveContainer" containerID="e8908909ab1a8004c9f4ea62f192e868522bd2f4014e09ab7762b6513e7d16fb" Feb 04 09:03:16 crc kubenswrapper[4644]: I0204 09:03:16.984626 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.007616 4644 scope.go:117] "RemoveContainer" containerID="e8908909ab1a8004c9f4ea62f192e868522bd2f4014e09ab7762b6513e7d16fb" Feb 04 09:03:17 crc kubenswrapper[4644]: E0204 09:03:17.008212 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8908909ab1a8004c9f4ea62f192e868522bd2f4014e09ab7762b6513e7d16fb\": container with ID starting with e8908909ab1a8004c9f4ea62f192e868522bd2f4014e09ab7762b6513e7d16fb not found: ID does not exist" containerID="e8908909ab1a8004c9f4ea62f192e868522bd2f4014e09ab7762b6513e7d16fb" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.008375 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8908909ab1a8004c9f4ea62f192e868522bd2f4014e09ab7762b6513e7d16fb"} err="failed to get container status \"e8908909ab1a8004c9f4ea62f192e868522bd2f4014e09ab7762b6513e7d16fb\": rpc error: code = NotFound desc = could not find container \"e8908909ab1a8004c9f4ea62f192e868522bd2f4014e09ab7762b6513e7d16fb\": container with ID starting with e8908909ab1a8004c9f4ea62f192e868522bd2f4014e09ab7762b6513e7d16fb not found: ID does not exist" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.032438 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v99nl\" (UniqueName: \"kubernetes.io/projected/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-kube-api-access-v99nl\") pod \"287914be-51e2-4982-a7e0-9e3cc4bfc1aa\" (UID: \"287914be-51e2-4982-a7e0-9e3cc4bfc1aa\") " Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.032808 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-config-data\") pod \"287914be-51e2-4982-a7e0-9e3cc4bfc1aa\" (UID: \"287914be-51e2-4982-a7e0-9e3cc4bfc1aa\") " Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.033006 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-combined-ca-bundle\") pod \"287914be-51e2-4982-a7e0-9e3cc4bfc1aa\" (UID: \"287914be-51e2-4982-a7e0-9e3cc4bfc1aa\") " Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.039785 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-kube-api-access-v99nl" (OuterVolumeSpecName: "kube-api-access-v99nl") pod "287914be-51e2-4982-a7e0-9e3cc4bfc1aa" (UID: "287914be-51e2-4982-a7e0-9e3cc4bfc1aa"). InnerVolumeSpecName "kube-api-access-v99nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.074210 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "287914be-51e2-4982-a7e0-9e3cc4bfc1aa" (UID: "287914be-51e2-4982-a7e0-9e3cc4bfc1aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.074766 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-config-data" (OuterVolumeSpecName: "config-data") pod "287914be-51e2-4982-a7e0-9e3cc4bfc1aa" (UID: "287914be-51e2-4982-a7e0-9e3cc4bfc1aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.137640 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.137681 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v99nl\" (UniqueName: \"kubernetes.io/projected/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-kube-api-access-v99nl\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.137697 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/287914be-51e2-4982-a7e0-9e3cc4bfc1aa-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.311648 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.322144 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.340103 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 09:03:17 crc kubenswrapper[4644]: E0204 09:03:17.340600 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="287914be-51e2-4982-a7e0-9e3cc4bfc1aa" containerName="nova-cell1-novncproxy-novncproxy" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.340635 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="287914be-51e2-4982-a7e0-9e3cc4bfc1aa" containerName="nova-cell1-novncproxy-novncproxy" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.340903 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="287914be-51e2-4982-a7e0-9e3cc4bfc1aa" containerName="nova-cell1-novncproxy-novncproxy" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.341649 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.346287 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.346776 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.347035 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.352983 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.443694 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed25922-57d7-4a67-828a-6a07c733ba91-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ed25922-57d7-4a67-828a-6a07c733ba91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.444685 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed25922-57d7-4a67-828a-6a07c733ba91-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ed25922-57d7-4a67-828a-6a07c733ba91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.444723 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed25922-57d7-4a67-828a-6a07c733ba91-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ed25922-57d7-4a67-828a-6a07c733ba91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.444804 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbrpp\" (UniqueName: \"kubernetes.io/projected/3ed25922-57d7-4a67-828a-6a07c733ba91-kube-api-access-nbrpp\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ed25922-57d7-4a67-828a-6a07c733ba91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.444832 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed25922-57d7-4a67-828a-6a07c733ba91-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ed25922-57d7-4a67-828a-6a07c733ba91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.546207 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed25922-57d7-4a67-828a-6a07c733ba91-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ed25922-57d7-4a67-828a-6a07c733ba91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.547247 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed25922-57d7-4a67-828a-6a07c733ba91-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ed25922-57d7-4a67-828a-6a07c733ba91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 
09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.547689 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed25922-57d7-4a67-828a-6a07c733ba91-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ed25922-57d7-4a67-828a-6a07c733ba91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.547795 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed25922-57d7-4a67-828a-6a07c733ba91-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ed25922-57d7-4a67-828a-6a07c733ba91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.547961 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbrpp\" (UniqueName: \"kubernetes.io/projected/3ed25922-57d7-4a67-828a-6a07c733ba91-kube-api-access-nbrpp\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ed25922-57d7-4a67-828a-6a07c733ba91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.549861 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed25922-57d7-4a67-828a-6a07c733ba91-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ed25922-57d7-4a67-828a-6a07c733ba91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.551129 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed25922-57d7-4a67-828a-6a07c733ba91-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ed25922-57d7-4a67-828a-6a07c733ba91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.554872 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed25922-57d7-4a67-828a-6a07c733ba91-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ed25922-57d7-4a67-828a-6a07c733ba91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.559514 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed25922-57d7-4a67-828a-6a07c733ba91-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ed25922-57d7-4a67-828a-6a07c733ba91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.564409 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbrpp\" (UniqueName: \"kubernetes.io/projected/3ed25922-57d7-4a67-828a-6a07c733ba91-kube-api-access-nbrpp\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ed25922-57d7-4a67-828a-6a07c733ba91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:17 crc kubenswrapper[4644]: I0204 09:03:17.661617 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.130861 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 09:03:18 crc kubenswrapper[4644]: W0204 09:03:18.132896 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ed25922_57d7_4a67_828a_6a07c733ba91.slice/crio-379bfdd75d01f228d8fcfa8bd8e3fd9d3fe291c86c178c145737fb0be4e91d2b WatchSource:0}: Error finding container 379bfdd75d01f228d8fcfa8bd8e3fd9d3fe291c86c178c145737fb0be4e91d2b: Status 404 returned error can't find the container with id 379bfdd75d01f228d8fcfa8bd8e3fd9d3fe291c86c178c145737fb0be4e91d2b Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.450794 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.452210 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.452535 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.452580 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.455739 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.463216 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.691999 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="287914be-51e2-4982-a7e0-9e3cc4bfc1aa" path="/var/lib/kubelet/pods/287914be-51e2-4982-a7e0-9e3cc4bfc1aa/volumes" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.692886 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-c6mnd"] Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.694308 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.711778 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-c6mnd"] Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.781398 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.781527 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prwf9\" (UniqueName: \"kubernetes.io/projected/63d82fcd-bb75-4881-b500-a77feab77cfc-kube-api-access-prwf9\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.781586 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-config\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.781638 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.781652 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.781696 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.884361 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prwf9\" (UniqueName: \"kubernetes.io/projected/63d82fcd-bb75-4881-b500-a77feab77cfc-kube-api-access-prwf9\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.884721 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-config\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.884825 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.884894 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.884975 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.885080 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.886107 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.886136 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.886209 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-config\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.886711 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.886740 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.913296 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prwf9\" (UniqueName: 
\"kubernetes.io/projected/63d82fcd-bb75-4881-b500-a77feab77cfc-kube-api-access-prwf9\") pod \"dnsmasq-dns-cd5cbd7b9-c6mnd\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.999118 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3ed25922-57d7-4a67-828a-6a07c733ba91","Type":"ContainerStarted","Data":"3edc23d3a1d5b9404ce15d48942bec3f5d0f75637be8111ca858051a9f480917"} Feb 04 09:03:18 crc kubenswrapper[4644]: I0204 09:03:18.999401 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3ed25922-57d7-4a67-828a-6a07c733ba91","Type":"ContainerStarted","Data":"379bfdd75d01f228d8fcfa8bd8e3fd9d3fe291c86c178c145737fb0be4e91d2b"} Feb 04 09:03:19 crc kubenswrapper[4644]: I0204 09:03:19.020866 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:19 crc kubenswrapper[4644]: I0204 09:03:19.032146 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.032128955 podStartE2EDuration="2.032128955s" podCreationTimestamp="2026-02-04 09:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:03:19.024043526 +0000 UTC m=+1309.064101281" watchObservedRunningTime="2026-02-04 09:03:19.032128955 +0000 UTC m=+1309.072186710" Feb 04 09:03:19 crc kubenswrapper[4644]: W0204 09:03:19.602372 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63d82fcd_bb75_4881_b500_a77feab77cfc.slice/crio-29f3f4c3f370ad013304a5fdb2c6cec69716c672bdfe70da106b8034289055b9 WatchSource:0}: Error finding container 29f3f4c3f370ad013304a5fdb2c6cec69716c672bdfe70da106b8034289055b9: Status 404 returned error can't find the container with id 29f3f4c3f370ad013304a5fdb2c6cec69716c672bdfe70da106b8034289055b9 Feb 04 09:03:19 crc kubenswrapper[4644]: I0204 09:03:19.608643 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-c6mnd"] Feb 04 09:03:20 crc kubenswrapper[4644]: I0204 09:03:20.009092 4644 generic.go:334] "Generic (PLEG): container finished" podID="63d82fcd-bb75-4881-b500-a77feab77cfc" containerID="0987936515dd3bba8336abc4f2143e4de489a0c0e57eb1384700a2431bd88988" exitCode=0 Feb 04 09:03:20 crc kubenswrapper[4644]: I0204 09:03:20.009182 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" event={"ID":"63d82fcd-bb75-4881-b500-a77feab77cfc","Type":"ContainerDied","Data":"0987936515dd3bba8336abc4f2143e4de489a0c0e57eb1384700a2431bd88988"} Feb 04 09:03:20 crc kubenswrapper[4644]: I0204 09:03:20.009488 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" event={"ID":"63d82fcd-bb75-4881-b500-a77feab77cfc","Type":"ContainerStarted","Data":"29f3f4c3f370ad013304a5fdb2c6cec69716c672bdfe70da106b8034289055b9"} Feb 04 09:03:21 crc kubenswrapper[4644]: I0204 09:03:21.020191 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" event={"ID":"63d82fcd-bb75-4881-b500-a77feab77cfc","Type":"ContainerStarted","Data":"84608fa017f5a390514c855df5d03567d32bc3e1a6e579c5c9a7352c1505d86b"} Feb 04 09:03:21 crc kubenswrapper[4644]: I0204 09:03:21.020818 4644 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:21 crc kubenswrapper[4644]: I0204 09:03:21.049284 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" podStartSLOduration=3.049259607 podStartE2EDuration="3.049259607s" podCreationTimestamp="2026-02-04 09:03:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:03:21.041195508 +0000 UTC m=+1311.081253263" watchObservedRunningTime="2026-02-04 09:03:21.049259607 +0000 UTC m=+1311.089317362" Feb 04 09:03:21 crc kubenswrapper[4644]: I0204 09:03:21.319754 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:03:21 crc kubenswrapper[4644]: I0204 09:03:21.319999 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="64625614-25ec-4c79-9250-b80273cb0b44" containerName="nova-api-log" containerID="cri-o://3018a86bc97112d1fcdf9f8bca94e3c21055c93cc660fd0fc559439070e272b5" gracePeriod=30 Feb 04 09:03:21 crc kubenswrapper[4644]: I0204 09:03:21.320131 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="64625614-25ec-4c79-9250-b80273cb0b44" containerName="nova-api-api" containerID="cri-o://9bef8085adab55e83ced86ed2ce826e4b289095a376875b14bb45cd14f95983b" gracePeriod=30 Feb 04 09:03:21 crc kubenswrapper[4644]: I0204 09:03:21.948910 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:03:21 crc kubenswrapper[4644]: I0204 09:03:21.949458 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="ceilometer-central-agent" containerID="cri-o://77efe9f60b23bce306b19b989f24df19ff5fc7785ff431756a67bf2a5d866df8" gracePeriod=30 Feb 04 09:03:21 crc kubenswrapper[4644]: I0204 09:03:21.949530 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="proxy-httpd" containerID="cri-o://b2a623750dbea2376073a7afb76454a49afb1e2ad4e20c6126cda3f403e9c55f" gracePeriod=30 Feb 04 09:03:21 crc kubenswrapper[4644]: I0204 09:03:21.949577 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="sg-core" containerID="cri-o://f038bab204926288e14e7cc7919e3f0353add7b892c4537df22d9742b3f988dd" gracePeriod=30 Feb 04 09:03:21 crc kubenswrapper[4644]: I0204 09:03:21.949568 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="ceilometer-notification-agent" containerID="cri-o://23d93f61d8277a5c54f4fa003d51c31f20abfc01b423cb28a0f50c7d4817c0cb" gracePeriod=30 Feb 04 09:03:22 crc kubenswrapper[4644]: I0204 09:03:22.030726 4644 generic.go:334] "Generic (PLEG): container finished" podID="64625614-25ec-4c79-9250-b80273cb0b44" containerID="3018a86bc97112d1fcdf9f8bca94e3c21055c93cc660fd0fc559439070e272b5" exitCode=143 Feb 04 09:03:22 crc kubenswrapper[4644]: I0204 09:03:22.031604 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"64625614-25ec-4c79-9250-b80273cb0b44","Type":"ContainerDied","Data":"3018a86bc97112d1fcdf9f8bca94e3c21055c93cc660fd0fc559439070e272b5"} Feb 04 09:03:22 crc kubenswrapper[4644]: I0204 09:03:22.670261 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:23 crc kubenswrapper[4644]: I0204 09:03:23.041400 4644 generic.go:334] "Generic (PLEG): container finished" podID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerID="b2a623750dbea2376073a7afb76454a49afb1e2ad4e20c6126cda3f403e9c55f" exitCode=0 Feb 04 09:03:23 crc kubenswrapper[4644]: I0204 09:03:23.041698 4644 generic.go:334] "Generic (PLEG): container finished" podID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerID="f038bab204926288e14e7cc7919e3f0353add7b892c4537df22d9742b3f988dd" exitCode=2 Feb 04 09:03:23 crc kubenswrapper[4644]: I0204 09:03:23.041711 4644 generic.go:334] "Generic (PLEG): container finished" podID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerID="77efe9f60b23bce306b19b989f24df19ff5fc7785ff431756a67bf2a5d866df8" exitCode=0 Feb 04 09:03:23 crc kubenswrapper[4644]: I0204 09:03:23.041472 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bea8f471-d501-4dd4-b1dc-3fa921ba12a8","Type":"ContainerDied","Data":"b2a623750dbea2376073a7afb76454a49afb1e2ad4e20c6126cda3f403e9c55f"} Feb 04 09:03:23 crc kubenswrapper[4644]: I0204 09:03:23.041747 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bea8f471-d501-4dd4-b1dc-3fa921ba12a8","Type":"ContainerDied","Data":"f038bab204926288e14e7cc7919e3f0353add7b892c4537df22d9742b3f988dd"} Feb 04 09:03:23 crc kubenswrapper[4644]: I0204 09:03:23.041761 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bea8f471-d501-4dd4-b1dc-3fa921ba12a8","Type":"ContainerDied","Data":"77efe9f60b23bce306b19b989f24df19ff5fc7785ff431756a67bf2a5d866df8"} Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.005515 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.072800 4644 generic.go:334] "Generic (PLEG): container finished" podID="64625614-25ec-4c79-9250-b80273cb0b44" containerID="9bef8085adab55e83ced86ed2ce826e4b289095a376875b14bb45cd14f95983b" exitCode=0 Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.073156 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.074102 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64625614-25ec-4c79-9250-b80273cb0b44","Type":"ContainerDied","Data":"9bef8085adab55e83ced86ed2ce826e4b289095a376875b14bb45cd14f95983b"} Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.074244 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64625614-25ec-4c79-9250-b80273cb0b44","Type":"ContainerDied","Data":"03ca5615f5b8b7e0fcdcfecc0aee75014ffc23dbe7422921d294cf65fc25398e"} Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.074364 4644 scope.go:117] "RemoveContainer" containerID="9bef8085adab55e83ced86ed2ce826e4b289095a376875b14bb45cd14f95983b" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.108910 4644 scope.go:117] "RemoveContainer" containerID="3018a86bc97112d1fcdf9f8bca94e3c21055c93cc660fd0fc559439070e272b5" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.116029 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64625614-25ec-4c79-9250-b80273cb0b44-combined-ca-bundle\") pod \"64625614-25ec-4c79-9250-b80273cb0b44\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.116466 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64625614-25ec-4c79-9250-b80273cb0b44-logs\") pod \"64625614-25ec-4c79-9250-b80273cb0b44\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.116756 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64625614-25ec-4c79-9250-b80273cb0b44-config-data\") pod \"64625614-25ec-4c79-9250-b80273cb0b44\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.116913 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5tk9\" (UniqueName: \"kubernetes.io/projected/64625614-25ec-4c79-9250-b80273cb0b44-kube-api-access-p5tk9\") pod \"64625614-25ec-4c79-9250-b80273cb0b44\" (UID: \"64625614-25ec-4c79-9250-b80273cb0b44\") " Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.120106 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64625614-25ec-4c79-9250-b80273cb0b44-logs" (OuterVolumeSpecName: "logs") pod "64625614-25ec-4c79-9250-b80273cb0b44" (UID: "64625614-25ec-4c79-9250-b80273cb0b44"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.146034 4644 scope.go:117] "RemoveContainer" containerID="9bef8085adab55e83ced86ed2ce826e4b289095a376875b14bb45cd14f95983b" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.146592 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64625614-25ec-4c79-9250-b80273cb0b44-kube-api-access-p5tk9" (OuterVolumeSpecName: "kube-api-access-p5tk9") pod "64625614-25ec-4c79-9250-b80273cb0b44" (UID: "64625614-25ec-4c79-9250-b80273cb0b44"). InnerVolumeSpecName "kube-api-access-p5tk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:03:25 crc kubenswrapper[4644]: E0204 09:03:25.149257 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bef8085adab55e83ced86ed2ce826e4b289095a376875b14bb45cd14f95983b\": container with ID starting with 9bef8085adab55e83ced86ed2ce826e4b289095a376875b14bb45cd14f95983b not found: ID does not exist" containerID="9bef8085adab55e83ced86ed2ce826e4b289095a376875b14bb45cd14f95983b" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.149294 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bef8085adab55e83ced86ed2ce826e4b289095a376875b14bb45cd14f95983b"} err="failed to get container status \"9bef8085adab55e83ced86ed2ce826e4b289095a376875b14bb45cd14f95983b\": rpc error: code = NotFound desc = could not find container \"9bef8085adab55e83ced86ed2ce826e4b289095a376875b14bb45cd14f95983b\": container with ID starting with 9bef8085adab55e83ced86ed2ce826e4b289095a376875b14bb45cd14f95983b not found: ID does not exist" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.149317 4644 scope.go:117] "RemoveContainer" containerID="3018a86bc97112d1fcdf9f8bca94e3c21055c93cc660fd0fc559439070e272b5" Feb 04 09:03:25 crc kubenswrapper[4644]: E0204 09:03:25.149657 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3018a86bc97112d1fcdf9f8bca94e3c21055c93cc660fd0fc559439070e272b5\": container with ID starting with 3018a86bc97112d1fcdf9f8bca94e3c21055c93cc660fd0fc559439070e272b5 not found: ID does not exist" containerID="3018a86bc97112d1fcdf9f8bca94e3c21055c93cc660fd0fc559439070e272b5" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.149681 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3018a86bc97112d1fcdf9f8bca94e3c21055c93cc660fd0fc559439070e272b5"} err="failed to get container status \"3018a86bc97112d1fcdf9f8bca94e3c21055c93cc660fd0fc559439070e272b5\": rpc error: code = NotFound desc = could not find container \"3018a86bc97112d1fcdf9f8bca94e3c21055c93cc660fd0fc559439070e272b5\": container with ID starting with 3018a86bc97112d1fcdf9f8bca94e3c21055c93cc660fd0fc559439070e272b5 not found: ID does not exist" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.162776 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64625614-25ec-4c79-9250-b80273cb0b44-config-data" (OuterVolumeSpecName: "config-data") pod "64625614-25ec-4c79-9250-b80273cb0b44" (UID: "64625614-25ec-4c79-9250-b80273cb0b44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.179695 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64625614-25ec-4c79-9250-b80273cb0b44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64625614-25ec-4c79-9250-b80273cb0b44" (UID: "64625614-25ec-4c79-9250-b80273cb0b44"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.219547 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64625614-25ec-4c79-9250-b80273cb0b44-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.219585 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5tk9\" (UniqueName: \"kubernetes.io/projected/64625614-25ec-4c79-9250-b80273cb0b44-kube-api-access-p5tk9\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.219598 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64625614-25ec-4c79-9250-b80273cb0b44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.219606 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64625614-25ec-4c79-9250-b80273cb0b44-logs\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.420159 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.436285 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.458408 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 04 09:03:25 crc kubenswrapper[4644]: E0204 09:03:25.458936 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64625614-25ec-4c79-9250-b80273cb0b44" containerName="nova-api-log" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.458961 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="64625614-25ec-4c79-9250-b80273cb0b44" containerName="nova-api-log" Feb 04 09:03:25 crc kubenswrapper[4644]: E0204 09:03:25.458988 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64625614-25ec-4c79-9250-b80273cb0b44" containerName="nova-api-api" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.458999 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="64625614-25ec-4c79-9250-b80273cb0b44" containerName="nova-api-api" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.459242 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="64625614-25ec-4c79-9250-b80273cb0b44" containerName="nova-api-log" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.459264 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="64625614-25ec-4c79-9250-b80273cb0b44" containerName="nova-api-api" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.460525 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.469758 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.469875 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.469948 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.500583 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.628882 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t8k5\" (UniqueName: \"kubernetes.io/projected/665b396a-9470-4142-af29-a1de2f961433-kube-api-access-5t8k5\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.628940 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.628973 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-config-data\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.628995 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-public-tls-certs\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.629152 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-internal-tls-certs\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.629220 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/665b396a-9470-4142-af29-a1de2f961433-logs\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.731393 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-internal-tls-certs\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.731709 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/665b396a-9470-4142-af29-a1de2f961433-logs\") pod \"nova-api-0\" (UID: 
\"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.731871 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t8k5\" (UniqueName: \"kubernetes.io/projected/665b396a-9470-4142-af29-a1de2f961433-kube-api-access-5t8k5\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.731981 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.732159 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-config-data\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.732245 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-public-tls-certs\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.732948 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/665b396a-9470-4142-af29-a1de2f961433-logs\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.748029 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-public-tls-certs\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.748109 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.749623 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-config-data\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.751746 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-internal-tls-certs\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.760268 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t8k5\" (UniqueName: \"kubernetes.io/projected/665b396a-9470-4142-af29-a1de2f961433-kube-api-access-5t8k5\") pod \"nova-api-0\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " 
pod="openstack/nova-api-0" Feb 04 09:03:25 crc kubenswrapper[4644]: I0204 09:03:25.788637 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 04 09:03:26 crc kubenswrapper[4644]: I0204 09:03:26.373551 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:03:26 crc kubenswrapper[4644]: I0204 09:03:26.689745 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64625614-25ec-4c79-9250-b80273cb0b44" path="/var/lib/kubelet/pods/64625614-25ec-4c79-9250-b80273cb0b44/volumes" Feb 04 09:03:27 crc kubenswrapper[4644]: I0204 09:03:27.118057 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"665b396a-9470-4142-af29-a1de2f961433","Type":"ContainerStarted","Data":"09b1294a05fa778de3acf6ef8efe8aae0b263ea1a2b383f62ff9dbbfb5efff9e"} Feb 04 09:03:27 crc kubenswrapper[4644]: I0204 09:03:27.118819 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"665b396a-9470-4142-af29-a1de2f961433","Type":"ContainerStarted","Data":"df6f829c758f962e9320cc979fbe36d67e9a53acebf0229f610c1282a63aeb7b"} Feb 04 09:03:27 crc kubenswrapper[4644]: I0204 09:03:27.118960 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"665b396a-9470-4142-af29-a1de2f961433","Type":"ContainerStarted","Data":"7cd0027a2e864661f01a8e5931f8e8132ce7ea5ddeb04f94e80990bd26c99d3a"} Feb 04 09:03:27 crc kubenswrapper[4644]: I0204 09:03:27.661973 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:27 crc kubenswrapper[4644]: I0204 09:03:27.690155 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:27 crc kubenswrapper[4644]: I0204 09:03:27.720056 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.720031763 podStartE2EDuration="2.720031763s" podCreationTimestamp="2026-02-04 09:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:03:27.162073979 +0000 UTC m=+1317.202131744" watchObservedRunningTime="2026-02-04 09:03:27.720031763 +0000 UTC m=+1317.760089518" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.149533 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.363261 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-jdgt7"] Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.364724 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.366644 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.366790 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.373807 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jdgt7"] Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.491484 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-scripts\") pod \"nova-cell1-cell-mapping-jdgt7\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.491587 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjnxr\" (UniqueName: \"kubernetes.io/projected/0c5c3a3a-8314-4a01-b164-038d9247e569-kube-api-access-gjnxr\") pod \"nova-cell1-cell-mapping-jdgt7\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.491622 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-config-data\") pod \"nova-cell1-cell-mapping-jdgt7\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.491646 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jdgt7\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.592862 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-config-data\") pod \"nova-cell1-cell-mapping-jdgt7\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.592934 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jdgt7\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.594071 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-scripts\") pod \"nova-cell1-cell-mapping-jdgt7\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.594538 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjnxr\" (UniqueName: 
\"kubernetes.io/projected/0c5c3a3a-8314-4a01-b164-038d9247e569-kube-api-access-gjnxr\") pod \"nova-cell1-cell-mapping-jdgt7\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.603477 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-config-data\") pod \"nova-cell1-cell-mapping-jdgt7\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.604702 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jdgt7\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.613163 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-scripts\") pod \"nova-cell1-cell-mapping-jdgt7\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.616377 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjnxr\" (UniqueName: \"kubernetes.io/projected/0c5c3a3a-8314-4a01-b164-038d9247e569-kube-api-access-gjnxr\") pod \"nova-cell1-cell-mapping-jdgt7\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.702315 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.722272 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.900491 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-sg-core-conf-yaml\") pod \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.900622 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-combined-ca-bundle\") pod \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.900659 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-log-httpd\") pod \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.900677 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-config-data\") pod \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.900714 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-run-httpd\") pod \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.900768 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-ceilometer-tls-certs\") pod \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.900847 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc4pw\" (UniqueName: \"kubernetes.io/projected/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-kube-api-access-rc4pw\") pod \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.900877 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-scripts\") pod \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\" (UID: \"bea8f471-d501-4dd4-b1dc-3fa921ba12a8\") " Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.902429 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bea8f471-d501-4dd4-b1dc-3fa921ba12a8" (UID: "bea8f471-d501-4dd4-b1dc-3fa921ba12a8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.904215 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bea8f471-d501-4dd4-b1dc-3fa921ba12a8" (UID: "bea8f471-d501-4dd4-b1dc-3fa921ba12a8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.909184 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-kube-api-access-rc4pw" (OuterVolumeSpecName: "kube-api-access-rc4pw") pod "bea8f471-d501-4dd4-b1dc-3fa921ba12a8" (UID: "bea8f471-d501-4dd4-b1dc-3fa921ba12a8"). InnerVolumeSpecName "kube-api-access-rc4pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.924579 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-scripts" (OuterVolumeSpecName: "scripts") pod "bea8f471-d501-4dd4-b1dc-3fa921ba12a8" (UID: "bea8f471-d501-4dd4-b1dc-3fa921ba12a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.953141 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bea8f471-d501-4dd4-b1dc-3fa921ba12a8" (UID: "bea8f471-d501-4dd4-b1dc-3fa921ba12a8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:28 crc kubenswrapper[4644]: I0204 09:03:28.990339 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bea8f471-d501-4dd4-b1dc-3fa921ba12a8" (UID: "bea8f471-d501-4dd4-b1dc-3fa921ba12a8"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.003497 4644 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.003692 4644 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.003750 4644 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.003852 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc4pw\" (UniqueName: \"kubernetes.io/projected/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-kube-api-access-rc4pw\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.003928 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.003986 4644 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.011575 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bea8f471-d501-4dd4-b1dc-3fa921ba12a8" (UID: "bea8f471-d501-4dd4-b1dc-3fa921ba12a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.024227 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.056073 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-config-data" (OuterVolumeSpecName: "config-data") pod "bea8f471-d501-4dd4-b1dc-3fa921ba12a8" (UID: "bea8f471-d501-4dd4-b1dc-3fa921ba12a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.094946 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-fzd7f"] Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.095201 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" podUID="51698a4f-9e64-41ea-9130-c197b4505acb" containerName="dnsmasq-dns" containerID="cri-o://f841c2352826f3563a19935f2f4be27b098431000189ce7ab9190ed7927b53cc" gracePeriod=10 Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.105866 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.105913 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea8f471-d501-4dd4-b1dc-3fa921ba12a8-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.149070 4644 generic.go:334] "Generic (PLEG): container finished" podID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerID="23d93f61d8277a5c54f4fa003d51c31f20abfc01b423cb28a0f50c7d4817c0cb" exitCode=0 Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.149148 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bea8f471-d501-4dd4-b1dc-3fa921ba12a8","Type":"ContainerDied","Data":"23d93f61d8277a5c54f4fa003d51c31f20abfc01b423cb28a0f50c7d4817c0cb"} Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.149168 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.149221 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bea8f471-d501-4dd4-b1dc-3fa921ba12a8","Type":"ContainerDied","Data":"9f9f13be6f817cfc6117f641ee0d76b0c4072ff0ad943c983099188e265a135d"} Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.149242 4644 scope.go:117] "RemoveContainer" containerID="b2a623750dbea2376073a7afb76454a49afb1e2ad4e20c6126cda3f403e9c55f" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.300078 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jdgt7"] Feb 04 09:03:29 crc kubenswrapper[4644]: W0204 09:03:29.329249 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c5c3a3a_8314_4a01_b164_038d9247e569.slice/crio-27c615e7db4cf5c082c9f715b725a4cfdea4c8fe9ded58b3d38272d5fe128471 WatchSource:0}: Error finding container 27c615e7db4cf5c082c9f715b725a4cfdea4c8fe9ded58b3d38272d5fe128471: Status 404 returned error can't find the container with id 27c615e7db4cf5c082c9f715b725a4cfdea4c8fe9ded58b3d38272d5fe128471 Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.347683 4644 scope.go:117] "RemoveContainer" containerID="f038bab204926288e14e7cc7919e3f0353add7b892c4537df22d9742b3f988dd" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.487868 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.493886 4644 scope.go:117] "RemoveContainer" containerID="23d93f61d8277a5c54f4fa003d51c31f20abfc01b423cb28a0f50c7d4817c0cb" Feb 04 09:03:29 crc 
kubenswrapper[4644]: I0204 09:03:29.496268 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.528315 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:03:29 crc kubenswrapper[4644]: E0204 09:03:29.528722 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="ceilometer-central-agent" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.528742 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="ceilometer-central-agent" Feb 04 09:03:29 crc kubenswrapper[4644]: E0204 09:03:29.528772 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="sg-core" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.528781 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="sg-core" Feb 04 09:03:29 crc kubenswrapper[4644]: E0204 09:03:29.528792 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="ceilometer-notification-agent" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.528800 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="ceilometer-notification-agent" Feb 04 09:03:29 crc kubenswrapper[4644]: E0204 09:03:29.528816 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="proxy-httpd" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.528821 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="proxy-httpd" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.528991 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="sg-core" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.529009 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="proxy-httpd" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.529020 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="ceilometer-notification-agent" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.529034 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" containerName="ceilometer-central-agent" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.530709 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.535152 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.535460 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.535605 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.553563 4644 scope.go:117] "RemoveContainer" containerID="77efe9f60b23bce306b19b989f24df19ff5fc7785ff431756a67bf2a5d866df8" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.575343 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.604195 4644 scope.go:117] "RemoveContainer" containerID="b2a623750dbea2376073a7afb76454a49afb1e2ad4e20c6126cda3f403e9c55f" Feb 04 09:03:29 crc kubenswrapper[4644]: E0204 09:03:29.604947 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2a623750dbea2376073a7afb76454a49afb1e2ad4e20c6126cda3f403e9c55f\": container with ID starting with b2a623750dbea2376073a7afb76454a49afb1e2ad4e20c6126cda3f403e9c55f not found: ID does not exist" containerID="b2a623750dbea2376073a7afb76454a49afb1e2ad4e20c6126cda3f403e9c55f" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.604998 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a623750dbea2376073a7afb76454a49afb1e2ad4e20c6126cda3f403e9c55f"} err="failed to get container status \"b2a623750dbea2376073a7afb76454a49afb1e2ad4e20c6126cda3f403e9c55f\": rpc error: code = NotFound desc = could not find container \"b2a623750dbea2376073a7afb76454a49afb1e2ad4e20c6126cda3f403e9c55f\": container with ID starting with b2a623750dbea2376073a7afb76454a49afb1e2ad4e20c6126cda3f403e9c55f not found: ID does not exist" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.605035 4644 scope.go:117] "RemoveContainer" containerID="f038bab204926288e14e7cc7919e3f0353add7b892c4537df22d9742b3f988dd" Feb 04 09:03:29 crc kubenswrapper[4644]: E0204 09:03:29.606713 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f038bab204926288e14e7cc7919e3f0353add7b892c4537df22d9742b3f988dd\": container with ID starting with f038bab204926288e14e7cc7919e3f0353add7b892c4537df22d9742b3f988dd not found: ID does not exist" containerID="f038bab204926288e14e7cc7919e3f0353add7b892c4537df22d9742b3f988dd" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.606746 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f038bab204926288e14e7cc7919e3f0353add7b892c4537df22d9742b3f988dd"} err="failed to get container status \"f038bab204926288e14e7cc7919e3f0353add7b892c4537df22d9742b3f988dd\": rpc error: code = NotFound desc = could not find container \"f038bab204926288e14e7cc7919e3f0353add7b892c4537df22d9742b3f988dd\": container with ID starting with f038bab204926288e14e7cc7919e3f0353add7b892c4537df22d9742b3f988dd not found: ID does not exist" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.606769 4644 scope.go:117] "RemoveContainer" containerID="23d93f61d8277a5c54f4fa003d51c31f20abfc01b423cb28a0f50c7d4817c0cb" Feb 04 09:03:29 
crc kubenswrapper[4644]: E0204 09:03:29.607013 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23d93f61d8277a5c54f4fa003d51c31f20abfc01b423cb28a0f50c7d4817c0cb\": container with ID starting with 23d93f61d8277a5c54f4fa003d51c31f20abfc01b423cb28a0f50c7d4817c0cb not found: ID does not exist" containerID="23d93f61d8277a5c54f4fa003d51c31f20abfc01b423cb28a0f50c7d4817c0cb" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.607035 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23d93f61d8277a5c54f4fa003d51c31f20abfc01b423cb28a0f50c7d4817c0cb"} err="failed to get container status \"23d93f61d8277a5c54f4fa003d51c31f20abfc01b423cb28a0f50c7d4817c0cb\": rpc error: code = NotFound desc = could not find container \"23d93f61d8277a5c54f4fa003d51c31f20abfc01b423cb28a0f50c7d4817c0cb\": container with ID starting with 23d93f61d8277a5c54f4fa003d51c31f20abfc01b423cb28a0f50c7d4817c0cb not found: ID does not exist" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.607051 4644 scope.go:117] "RemoveContainer" containerID="77efe9f60b23bce306b19b989f24df19ff5fc7785ff431756a67bf2a5d866df8" Feb 04 09:03:29 crc kubenswrapper[4644]: E0204 09:03:29.607378 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77efe9f60b23bce306b19b989f24df19ff5fc7785ff431756a67bf2a5d866df8\": container with ID starting with 77efe9f60b23bce306b19b989f24df19ff5fc7785ff431756a67bf2a5d866df8 not found: ID does not exist" containerID="77efe9f60b23bce306b19b989f24df19ff5fc7785ff431756a67bf2a5d866df8" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.607397 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77efe9f60b23bce306b19b989f24df19ff5fc7785ff431756a67bf2a5d866df8"} err="failed to get container status \"77efe9f60b23bce306b19b989f24df19ff5fc7785ff431756a67bf2a5d866df8\": rpc error: code = NotFound desc = could not find container \"77efe9f60b23bce306b19b989f24df19ff5fc7785ff431756a67bf2a5d866df8\": container with ID starting with 77efe9f60b23bce306b19b989f24df19ff5fc7785ff431756a67bf2a5d866df8 not found: ID does not exist" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.615758 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9334f7-37d6-49f6-9c7f-e5b301283f15-log-httpd\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.615809 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9334f7-37d6-49f6-9c7f-e5b301283f15-config-data\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.615876 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9334f7-37d6-49f6-9c7f-e5b301283f15-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.615923 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-h4qd9\" (UniqueName: \"kubernetes.io/projected/6e9334f7-37d6-49f6-9c7f-e5b301283f15-kube-api-access-h4qd9\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.615977 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e9334f7-37d6-49f6-9c7f-e5b301283f15-scripts\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.616015 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e9334f7-37d6-49f6-9c7f-e5b301283f15-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.616041 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9334f7-37d6-49f6-9c7f-e5b301283f15-run-httpd\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.616068 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9334f7-37d6-49f6-9c7f-e5b301283f15-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.717241 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e9334f7-37d6-49f6-9c7f-e5b301283f15-scripts\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.717302 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e9334f7-37d6-49f6-9c7f-e5b301283f15-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.717321 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9334f7-37d6-49f6-9c7f-e5b301283f15-run-httpd\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.717411 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9334f7-37d6-49f6-9c7f-e5b301283f15-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.717464 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9334f7-37d6-49f6-9c7f-e5b301283f15-log-httpd\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.717484 4644 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9334f7-37d6-49f6-9c7f-e5b301283f15-config-data\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.717534 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9334f7-37d6-49f6-9c7f-e5b301283f15-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.717567 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4qd9\" (UniqueName: \"kubernetes.io/projected/6e9334f7-37d6-49f6-9c7f-e5b301283f15-kube-api-access-h4qd9\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.719122 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9334f7-37d6-49f6-9c7f-e5b301283f15-run-httpd\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.719187 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9334f7-37d6-49f6-9c7f-e5b301283f15-log-httpd\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.723837 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9334f7-37d6-49f6-9c7f-e5b301283f15-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.724311 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9334f7-37d6-49f6-9c7f-e5b301283f15-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.726511 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e9334f7-37d6-49f6-9c7f-e5b301283f15-scripts\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.727472 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9334f7-37d6-49f6-9c7f-e5b301283f15-config-data\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.729270 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e9334f7-37d6-49f6-9c7f-e5b301283f15-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.749154 4644 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-h4qd9\" (UniqueName: \"kubernetes.io/projected/6e9334f7-37d6-49f6-9c7f-e5b301283f15-kube-api-access-h4qd9\") pod \"ceilometer-0\" (UID: \"6e9334f7-37d6-49f6-9c7f-e5b301283f15\") " pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.837452 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.869778 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.921049 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-ovsdbserver-sb\") pod \"51698a4f-9e64-41ea-9130-c197b4505acb\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.921173 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-config\") pod \"51698a4f-9e64-41ea-9130-c197b4505acb\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.921253 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-dns-svc\") pod \"51698a4f-9e64-41ea-9130-c197b4505acb\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.938501 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpx5f\" (UniqueName: \"kubernetes.io/projected/51698a4f-9e64-41ea-9130-c197b4505acb-kube-api-access-hpx5f\") pod \"51698a4f-9e64-41ea-9130-c197b4505acb\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.938654 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-dns-swift-storage-0\") pod \"51698a4f-9e64-41ea-9130-c197b4505acb\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.938710 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-ovsdbserver-nb\") pod \"51698a4f-9e64-41ea-9130-c197b4505acb\" (UID: \"51698a4f-9e64-41ea-9130-c197b4505acb\") " Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.964670 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51698a4f-9e64-41ea-9130-c197b4505acb-kube-api-access-hpx5f" (OuterVolumeSpecName: "kube-api-access-hpx5f") pod "51698a4f-9e64-41ea-9130-c197b4505acb" (UID: "51698a4f-9e64-41ea-9130-c197b4505acb"). InnerVolumeSpecName "kube-api-access-hpx5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:03:29 crc kubenswrapper[4644]: I0204 09:03:29.980311 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-config" (OuterVolumeSpecName: "config") pod "51698a4f-9e64-41ea-9130-c197b4505acb" (UID: "51698a4f-9e64-41ea-9130-c197b4505acb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.010262 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "51698a4f-9e64-41ea-9130-c197b4505acb" (UID: "51698a4f-9e64-41ea-9130-c197b4505acb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.013041 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "51698a4f-9e64-41ea-9130-c197b4505acb" (UID: "51698a4f-9e64-41ea-9130-c197b4505acb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.027002 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "51698a4f-9e64-41ea-9130-c197b4505acb" (UID: "51698a4f-9e64-41ea-9130-c197b4505acb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.041247 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-config\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.042031 4644 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.042112 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpx5f\" (UniqueName: \"kubernetes.io/projected/51698a4f-9e64-41ea-9130-c197b4505acb-kube-api-access-hpx5f\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.042185 4644 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.042259 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.042005 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "51698a4f-9e64-41ea-9130-c197b4505acb" (UID: "51698a4f-9e64-41ea-9130-c197b4505acb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.144116 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51698a4f-9e64-41ea-9130-c197b4505acb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.175532 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jdgt7" event={"ID":"0c5c3a3a-8314-4a01-b164-038d9247e569","Type":"ContainerStarted","Data":"2fe3556f7a3a4789fe708c36e64cfd33808338b807d85ecb8e8d3fb7a06a9067"} Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.175577 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jdgt7" event={"ID":"0c5c3a3a-8314-4a01-b164-038d9247e569","Type":"ContainerStarted","Data":"27c615e7db4cf5c082c9f715b725a4cfdea4c8fe9ded58b3d38272d5fe128471"} Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.186135 4644 generic.go:334] "Generic (PLEG): container finished" podID="51698a4f-9e64-41ea-9130-c197b4505acb" containerID="f841c2352826f3563a19935f2f4be27b098431000189ce7ab9190ed7927b53cc" exitCode=0 Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.186184 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" event={"ID":"51698a4f-9e64-41ea-9130-c197b4505acb","Type":"ContainerDied","Data":"f841c2352826f3563a19935f2f4be27b098431000189ce7ab9190ed7927b53cc"} Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.186215 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" event={"ID":"51698a4f-9e64-41ea-9130-c197b4505acb","Type":"ContainerDied","Data":"2c932d37126904e7e4d8076176e5ff61dc17f88daa6a7266702b7a5521da2219"} Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.186242 4644 scope.go:117] "RemoveContainer" containerID="f841c2352826f3563a19935f2f4be27b098431000189ce7ab9190ed7927b53cc" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.186250 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-fzd7f" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.222657 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-jdgt7" podStartSLOduration=2.222632852 podStartE2EDuration="2.222632852s" podCreationTimestamp="2026-02-04 09:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:03:30.19600355 +0000 UTC m=+1320.236061315" watchObservedRunningTime="2026-02-04 09:03:30.222632852 +0000 UTC m=+1320.262690607" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.297224 4644 scope.go:117] "RemoveContainer" containerID="eb70e390ab37a83ab27c069433b36ea11e59f6531cd387b2fcca57d0f689f200" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.298222 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-fzd7f"] Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.306683 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-fzd7f"] Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.324520 4644 scope.go:117] "RemoveContainer" containerID="f841c2352826f3563a19935f2f4be27b098431000189ce7ab9190ed7927b53cc" Feb 04 09:03:30 crc kubenswrapper[4644]: E0204 09:03:30.332078 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f841c2352826f3563a19935f2f4be27b098431000189ce7ab9190ed7927b53cc\": container with ID starting with f841c2352826f3563a19935f2f4be27b098431000189ce7ab9190ed7927b53cc not found: ID does not exist" containerID="f841c2352826f3563a19935f2f4be27b098431000189ce7ab9190ed7927b53cc" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.332112 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f841c2352826f3563a19935f2f4be27b098431000189ce7ab9190ed7927b53cc"} err="failed to get container status \"f841c2352826f3563a19935f2f4be27b098431000189ce7ab9190ed7927b53cc\": rpc error: code = NotFound desc = could not find container \"f841c2352826f3563a19935f2f4be27b098431000189ce7ab9190ed7927b53cc\": container with ID starting with f841c2352826f3563a19935f2f4be27b098431000189ce7ab9190ed7927b53cc not found: ID does not exist" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.332135 4644 scope.go:117] "RemoveContainer" containerID="eb70e390ab37a83ab27c069433b36ea11e59f6531cd387b2fcca57d0f689f200" Feb 04 09:03:30 crc kubenswrapper[4644]: E0204 09:03:30.332731 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb70e390ab37a83ab27c069433b36ea11e59f6531cd387b2fcca57d0f689f200\": container with ID starting with eb70e390ab37a83ab27c069433b36ea11e59f6531cd387b2fcca57d0f689f200 not found: ID does not exist" containerID="eb70e390ab37a83ab27c069433b36ea11e59f6531cd387b2fcca57d0f689f200" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.332788 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb70e390ab37a83ab27c069433b36ea11e59f6531cd387b2fcca57d0f689f200"} err="failed to get container status \"eb70e390ab37a83ab27c069433b36ea11e59f6531cd387b2fcca57d0f689f200\": rpc error: code = NotFound desc = could not find container \"eb70e390ab37a83ab27c069433b36ea11e59f6531cd387b2fcca57d0f689f200\": container with ID starting with 
eb70e390ab37a83ab27c069433b36ea11e59f6531cd387b2fcca57d0f689f200 not found: ID does not exist" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.389170 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 09:03:30 crc kubenswrapper[4644]: W0204 09:03:30.389786 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e9334f7_37d6_49f6_9c7f_e5b301283f15.slice/crio-402567f9fccb582155a578d0e4ebc53ed04ca464eb0b9a3bbbf34944bf37e516 WatchSource:0}: Error finding container 402567f9fccb582155a578d0e4ebc53ed04ca464eb0b9a3bbbf34944bf37e516: Status 404 returned error can't find the container with id 402567f9fccb582155a578d0e4ebc53ed04ca464eb0b9a3bbbf34944bf37e516 Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.674067 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51698a4f-9e64-41ea-9130-c197b4505acb" path="/var/lib/kubelet/pods/51698a4f-9e64-41ea-9130-c197b4505acb/volumes" Feb 04 09:03:30 crc kubenswrapper[4644]: I0204 09:03:30.676945 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea8f471-d501-4dd4-b1dc-3fa921ba12a8" path="/var/lib/kubelet/pods/bea8f471-d501-4dd4-b1dc-3fa921ba12a8/volumes" Feb 04 09:03:31 crc kubenswrapper[4644]: I0204 09:03:31.215775 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e9334f7-37d6-49f6-9c7f-e5b301283f15","Type":"ContainerStarted","Data":"70fffbf8cef2145ab1d61b02c9e961b26cd1aab9ec1527b69560c45df1da1a97"} Feb 04 09:03:31 crc kubenswrapper[4644]: I0204 09:03:31.216038 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e9334f7-37d6-49f6-9c7f-e5b301283f15","Type":"ContainerStarted","Data":"402567f9fccb582155a578d0e4ebc53ed04ca464eb0b9a3bbbf34944bf37e516"} Feb 04 09:03:32 crc kubenswrapper[4644]: I0204 09:03:32.225110 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e9334f7-37d6-49f6-9c7f-e5b301283f15","Type":"ContainerStarted","Data":"cc5edce223d9f0e6c461c4d9f1800eb2e2fa5a229739bcbffc59ed74e2cc8abc"} Feb 04 09:03:33 crc kubenswrapper[4644]: I0204 09:03:33.234947 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e9334f7-37d6-49f6-9c7f-e5b301283f15","Type":"ContainerStarted","Data":"f9d7b94c21e3d7754f77721f8518801710157ad240751fb958a41ac47857037b"} Feb 04 09:03:35 crc kubenswrapper[4644]: I0204 09:03:35.259387 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e9334f7-37d6-49f6-9c7f-e5b301283f15","Type":"ContainerStarted","Data":"6a59c3a2deb59cccfe837f9c8470eca136110a8482f2669360cf034cf11bb414"} Feb 04 09:03:35 crc kubenswrapper[4644]: I0204 09:03:35.259950 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 04 09:03:35 crc kubenswrapper[4644]: I0204 09:03:35.303244 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3035722610000002 podStartE2EDuration="6.303228757s" podCreationTimestamp="2026-02-04 09:03:29 +0000 UTC" firstStartedPulling="2026-02-04 09:03:30.393311251 +0000 UTC m=+1320.433369006" lastFinishedPulling="2026-02-04 09:03:34.392967747 +0000 UTC m=+1324.433025502" observedRunningTime="2026-02-04 09:03:35.299528647 +0000 UTC m=+1325.339586402" watchObservedRunningTime="2026-02-04 09:03:35.303228757 +0000 UTC 
m=+1325.343286512" Feb 04 09:03:35 crc kubenswrapper[4644]: I0204 09:03:35.789855 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 04 09:03:35 crc kubenswrapper[4644]: I0204 09:03:35.791903 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 04 09:03:36 crc kubenswrapper[4644]: I0204 09:03:36.273581 4644 generic.go:334] "Generic (PLEG): container finished" podID="0c5c3a3a-8314-4a01-b164-038d9247e569" containerID="2fe3556f7a3a4789fe708c36e64cfd33808338b807d85ecb8e8d3fb7a06a9067" exitCode=0 Feb 04 09:03:36 crc kubenswrapper[4644]: I0204 09:03:36.273640 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jdgt7" event={"ID":"0c5c3a3a-8314-4a01-b164-038d9247e569","Type":"ContainerDied","Data":"2fe3556f7a3a4789fe708c36e64cfd33808338b807d85ecb8e8d3fb7a06a9067"} Feb 04 09:03:36 crc kubenswrapper[4644]: I0204 09:03:36.824452 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="665b396a-9470-4142-af29-a1de2f961433" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 04 09:03:36 crc kubenswrapper[4644]: I0204 09:03:36.825086 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="665b396a-9470-4142-af29-a1de2f961433" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 04 09:03:37 crc kubenswrapper[4644]: I0204 09:03:37.633202 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:37 crc kubenswrapper[4644]: I0204 09:03:37.691900 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-config-data\") pod \"0c5c3a3a-8314-4a01-b164-038d9247e569\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " Feb 04 09:03:37 crc kubenswrapper[4644]: I0204 09:03:37.691954 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-scripts\") pod \"0c5c3a3a-8314-4a01-b164-038d9247e569\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " Feb 04 09:03:37 crc kubenswrapper[4644]: I0204 09:03:37.692043 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjnxr\" (UniqueName: \"kubernetes.io/projected/0c5c3a3a-8314-4a01-b164-038d9247e569-kube-api-access-gjnxr\") pod \"0c5c3a3a-8314-4a01-b164-038d9247e569\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " Feb 04 09:03:37 crc kubenswrapper[4644]: I0204 09:03:37.692159 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-combined-ca-bundle\") pod \"0c5c3a3a-8314-4a01-b164-038d9247e569\" (UID: \"0c5c3a3a-8314-4a01-b164-038d9247e569\") " Feb 04 09:03:37 crc kubenswrapper[4644]: I0204 09:03:37.698290 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c5c3a3a-8314-4a01-b164-038d9247e569-kube-api-access-gjnxr" (OuterVolumeSpecName: "kube-api-access-gjnxr") pod 
"0c5c3a3a-8314-4a01-b164-038d9247e569" (UID: "0c5c3a3a-8314-4a01-b164-038d9247e569"). InnerVolumeSpecName "kube-api-access-gjnxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:03:37 crc kubenswrapper[4644]: I0204 09:03:37.714578 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-scripts" (OuterVolumeSpecName: "scripts") pod "0c5c3a3a-8314-4a01-b164-038d9247e569" (UID: "0c5c3a3a-8314-4a01-b164-038d9247e569"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:37 crc kubenswrapper[4644]: I0204 09:03:37.768448 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c5c3a3a-8314-4a01-b164-038d9247e569" (UID: "0c5c3a3a-8314-4a01-b164-038d9247e569"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:37 crc kubenswrapper[4644]: I0204 09:03:37.771403 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-config-data" (OuterVolumeSpecName: "config-data") pod "0c5c3a3a-8314-4a01-b164-038d9247e569" (UID: "0c5c3a3a-8314-4a01-b164-038d9247e569"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:37 crc kubenswrapper[4644]: I0204 09:03:37.795417 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:37 crc kubenswrapper[4644]: I0204 09:03:37.795447 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:37 crc kubenswrapper[4644]: I0204 09:03:37.795457 4644 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c5c3a3a-8314-4a01-b164-038d9247e569-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:37 crc kubenswrapper[4644]: I0204 09:03:37.795468 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjnxr\" (UniqueName: \"kubernetes.io/projected/0c5c3a3a-8314-4a01-b164-038d9247e569-kube-api-access-gjnxr\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:38 crc kubenswrapper[4644]: I0204 09:03:38.292286 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jdgt7" event={"ID":"0c5c3a3a-8314-4a01-b164-038d9247e569","Type":"ContainerDied","Data":"27c615e7db4cf5c082c9f715b725a4cfdea4c8fe9ded58b3d38272d5fe128471"} Feb 04 09:03:38 crc kubenswrapper[4644]: I0204 09:03:38.292347 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27c615e7db4cf5c082c9f715b725a4cfdea4c8fe9ded58b3d38272d5fe128471" Feb 04 09:03:38 crc kubenswrapper[4644]: I0204 09:03:38.292402 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jdgt7" Feb 04 09:03:38 crc kubenswrapper[4644]: I0204 09:03:38.480093 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:03:38 crc kubenswrapper[4644]: I0204 09:03:38.480555 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="665b396a-9470-4142-af29-a1de2f961433" containerName="nova-api-log" containerID="cri-o://df6f829c758f962e9320cc979fbe36d67e9a53acebf0229f610c1282a63aeb7b" gracePeriod=30 Feb 04 09:03:38 crc kubenswrapper[4644]: I0204 09:03:38.480648 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="665b396a-9470-4142-af29-a1de2f961433" containerName="nova-api-api" containerID="cri-o://09b1294a05fa778de3acf6ef8efe8aae0b263ea1a2b383f62ff9dbbfb5efff9e" gracePeriod=30 Feb 04 09:03:38 crc kubenswrapper[4644]: I0204 09:03:38.491494 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 09:03:38 crc kubenswrapper[4644]: I0204 09:03:38.491706 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="eda9f246-8001-4ff0-bfd4-660adb11240c" containerName="nova-scheduler-scheduler" containerID="cri-o://da07052ca438407d43e684433826b46ca40a0b0e3950d79c7d35ce8448e643ef" gracePeriod=30 Feb 04 09:03:38 crc kubenswrapper[4644]: I0204 09:03:38.546554 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:03:38 crc kubenswrapper[4644]: I0204 09:03:38.546806 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0b96a776-d2b3-470b-aff3-559fc8afc17f" containerName="nova-metadata-log" containerID="cri-o://5b56ad3ca9d1830f449dc90ef39f6a93297dcf6c472ae95797be269f72741752" gracePeriod=30 Feb 04 09:03:38 crc kubenswrapper[4644]: I0204 09:03:38.547359 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0b96a776-d2b3-470b-aff3-559fc8afc17f" containerName="nova-metadata-metadata" containerID="cri-o://ab67f5096f47ebb2c1469268bcff8ff04d272b2064f447f0ce0a792283decd8a" gracePeriod=30 Feb 04 09:03:39 crc kubenswrapper[4644]: I0204 09:03:39.301479 4644 generic.go:334] "Generic (PLEG): container finished" podID="0b96a776-d2b3-470b-aff3-559fc8afc17f" containerID="5b56ad3ca9d1830f449dc90ef39f6a93297dcf6c472ae95797be269f72741752" exitCode=143 Feb 04 09:03:39 crc kubenswrapper[4644]: I0204 09:03:39.301567 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b96a776-d2b3-470b-aff3-559fc8afc17f","Type":"ContainerDied","Data":"5b56ad3ca9d1830f449dc90ef39f6a93297dcf6c472ae95797be269f72741752"} Feb 04 09:03:39 crc kubenswrapper[4644]: I0204 09:03:39.303796 4644 generic.go:334] "Generic (PLEG): container finished" podID="665b396a-9470-4142-af29-a1de2f961433" containerID="df6f829c758f962e9320cc979fbe36d67e9a53acebf0229f610c1282a63aeb7b" exitCode=143 Feb 04 09:03:39 crc kubenswrapper[4644]: I0204 09:03:39.303848 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"665b396a-9470-4142-af29-a1de2f961433","Type":"ContainerDied","Data":"df6f829c758f962e9320cc979fbe36d67e9a53acebf0229f610c1282a63aeb7b"} Feb 04 09:03:41 crc kubenswrapper[4644]: I0204 09:03:41.950701 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" 
podUID="0b96a776-d2b3-470b-aff3-559fc8afc17f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:43190->10.217.0.196:8775: read: connection reset by peer" Feb 04 09:03:41 crc kubenswrapper[4644]: I0204 09:03:41.950749 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0b96a776-d2b3-470b-aff3-559fc8afc17f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:43192->10.217.0.196:8775: read: connection reset by peer" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.085552 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.172896 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda9f246-8001-4ff0-bfd4-660adb11240c-combined-ca-bundle\") pod \"eda9f246-8001-4ff0-bfd4-660adb11240c\" (UID: \"eda9f246-8001-4ff0-bfd4-660adb11240c\") " Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.173094 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eda9f246-8001-4ff0-bfd4-660adb11240c-config-data\") pod \"eda9f246-8001-4ff0-bfd4-660adb11240c\" (UID: \"eda9f246-8001-4ff0-bfd4-660adb11240c\") " Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.173146 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckc4\" (UniqueName: \"kubernetes.io/projected/eda9f246-8001-4ff0-bfd4-660adb11240c-kube-api-access-hckc4\") pod \"eda9f246-8001-4ff0-bfd4-660adb11240c\" (UID: \"eda9f246-8001-4ff0-bfd4-660adb11240c\") " Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.179954 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda9f246-8001-4ff0-bfd4-660adb11240c-kube-api-access-hckc4" (OuterVolumeSpecName: "kube-api-access-hckc4") pod "eda9f246-8001-4ff0-bfd4-660adb11240c" (UID: "eda9f246-8001-4ff0-bfd4-660adb11240c"). InnerVolumeSpecName "kube-api-access-hckc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.217210 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda9f246-8001-4ff0-bfd4-660adb11240c-config-data" (OuterVolumeSpecName: "config-data") pod "eda9f246-8001-4ff0-bfd4-660adb11240c" (UID: "eda9f246-8001-4ff0-bfd4-660adb11240c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.265824 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda9f246-8001-4ff0-bfd4-660adb11240c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eda9f246-8001-4ff0-bfd4-660adb11240c" (UID: "eda9f246-8001-4ff0-bfd4-660adb11240c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.276620 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda9f246-8001-4ff0-bfd4-660adb11240c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.276666 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eda9f246-8001-4ff0-bfd4-660adb11240c-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.276684 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hckc4\" (UniqueName: \"kubernetes.io/projected/eda9f246-8001-4ff0-bfd4-660adb11240c-kube-api-access-hckc4\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.292258 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.341416 4644 generic.go:334] "Generic (PLEG): container finished" podID="eda9f246-8001-4ff0-bfd4-660adb11240c" containerID="da07052ca438407d43e684433826b46ca40a0b0e3950d79c7d35ce8448e643ef" exitCode=0 Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.341778 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.342832 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eda9f246-8001-4ff0-bfd4-660adb11240c","Type":"ContainerDied","Data":"da07052ca438407d43e684433826b46ca40a0b0e3950d79c7d35ce8448e643ef"} Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.342861 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eda9f246-8001-4ff0-bfd4-660adb11240c","Type":"ContainerDied","Data":"431c910d145cfced994c415c09a23d4faac0ff551552f58e1e97072e32274801"} Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.342878 4644 scope.go:117] "RemoveContainer" containerID="da07052ca438407d43e684433826b46ca40a0b0e3950d79c7d35ce8448e643ef" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.346194 4644 generic.go:334] "Generic (PLEG): container finished" podID="0b96a776-d2b3-470b-aff3-559fc8afc17f" containerID="ab67f5096f47ebb2c1469268bcff8ff04d272b2064f447f0ce0a792283decd8a" exitCode=0 Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.346242 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b96a776-d2b3-470b-aff3-559fc8afc17f","Type":"ContainerDied","Data":"ab67f5096f47ebb2c1469268bcff8ff04d272b2064f447f0ce0a792283decd8a"} Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.346286 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b96a776-d2b3-470b-aff3-559fc8afc17f","Type":"ContainerDied","Data":"288ba4f4f4aed280458fbd7f1575e6491634807c6b9345461b1bfec2bbcfa51a"} Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.346445 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.377084 4644 scope.go:117] "RemoveContainer" containerID="da07052ca438407d43e684433826b46ca40a0b0e3950d79c7d35ce8448e643ef" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.378171 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-nova-metadata-tls-certs\") pod \"0b96a776-d2b3-470b-aff3-559fc8afc17f\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.378293 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b96a776-d2b3-470b-aff3-559fc8afc17f-logs\") pod \"0b96a776-d2b3-470b-aff3-559fc8afc17f\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.378439 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q22kj\" (UniqueName: \"kubernetes.io/projected/0b96a776-d2b3-470b-aff3-559fc8afc17f-kube-api-access-q22kj\") pod \"0b96a776-d2b3-470b-aff3-559fc8afc17f\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.378648 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-combined-ca-bundle\") pod \"0b96a776-d2b3-470b-aff3-559fc8afc17f\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.378809 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-config-data\") pod \"0b96a776-d2b3-470b-aff3-559fc8afc17f\" (UID: \"0b96a776-d2b3-470b-aff3-559fc8afc17f\") " Feb 04 09:03:42 crc kubenswrapper[4644]: E0204 09:03:42.378482 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da07052ca438407d43e684433826b46ca40a0b0e3950d79c7d35ce8448e643ef\": container with ID starting with da07052ca438407d43e684433826b46ca40a0b0e3950d79c7d35ce8448e643ef not found: ID does not exist" containerID="da07052ca438407d43e684433826b46ca40a0b0e3950d79c7d35ce8448e643ef" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.379811 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da07052ca438407d43e684433826b46ca40a0b0e3950d79c7d35ce8448e643ef"} err="failed to get container status \"da07052ca438407d43e684433826b46ca40a0b0e3950d79c7d35ce8448e643ef\": rpc error: code = NotFound desc = could not find container \"da07052ca438407d43e684433826b46ca40a0b0e3950d79c7d35ce8448e643ef\": container with ID starting with da07052ca438407d43e684433826b46ca40a0b0e3950d79c7d35ce8448e643ef not found: ID does not exist" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.379890 4644 scope.go:117] "RemoveContainer" containerID="ab67f5096f47ebb2c1469268bcff8ff04d272b2064f447f0ce0a792283decd8a" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.380627 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b96a776-d2b3-470b-aff3-559fc8afc17f-logs" (OuterVolumeSpecName: "logs") pod "0b96a776-d2b3-470b-aff3-559fc8afc17f" (UID: 
"0b96a776-d2b3-470b-aff3-559fc8afc17f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.390734 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b96a776-d2b3-470b-aff3-559fc8afc17f-kube-api-access-q22kj" (OuterVolumeSpecName: "kube-api-access-q22kj") pod "0b96a776-d2b3-470b-aff3-559fc8afc17f" (UID: "0b96a776-d2b3-470b-aff3-559fc8afc17f"). InnerVolumeSpecName "kube-api-access-q22kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.435982 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.443717 4644 scope.go:117] "RemoveContainer" containerID="5b56ad3ca9d1830f449dc90ef39f6a93297dcf6c472ae95797be269f72741752" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.453101 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.463351 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b96a776-d2b3-470b-aff3-559fc8afc17f" (UID: "0b96a776-d2b3-470b-aff3-559fc8afc17f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.465047 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 09:03:42 crc kubenswrapper[4644]: E0204 09:03:42.465509 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5c3a3a-8314-4a01-b164-038d9247e569" containerName="nova-manage" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.465524 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5c3a3a-8314-4a01-b164-038d9247e569" containerName="nova-manage" Feb 04 09:03:42 crc kubenswrapper[4644]: E0204 09:03:42.465534 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b96a776-d2b3-470b-aff3-559fc8afc17f" containerName="nova-metadata-log" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.465541 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b96a776-d2b3-470b-aff3-559fc8afc17f" containerName="nova-metadata-log" Feb 04 09:03:42 crc kubenswrapper[4644]: E0204 09:03:42.465550 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda9f246-8001-4ff0-bfd4-660adb11240c" containerName="nova-scheduler-scheduler" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.465557 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda9f246-8001-4ff0-bfd4-660adb11240c" containerName="nova-scheduler-scheduler" Feb 04 09:03:42 crc kubenswrapper[4644]: E0204 09:03:42.465568 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51698a4f-9e64-41ea-9130-c197b4505acb" containerName="init" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.465574 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="51698a4f-9e64-41ea-9130-c197b4505acb" containerName="init" Feb 04 09:03:42 crc kubenswrapper[4644]: E0204 09:03:42.465596 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b96a776-d2b3-470b-aff3-559fc8afc17f" containerName="nova-metadata-metadata" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.465601 4644 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="0b96a776-d2b3-470b-aff3-559fc8afc17f" containerName="nova-metadata-metadata" Feb 04 09:03:42 crc kubenswrapper[4644]: E0204 09:03:42.465614 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51698a4f-9e64-41ea-9130-c197b4505acb" containerName="dnsmasq-dns" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.465620 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="51698a4f-9e64-41ea-9130-c197b4505acb" containerName="dnsmasq-dns" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.465834 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b96a776-d2b3-470b-aff3-559fc8afc17f" containerName="nova-metadata-metadata" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.465849 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b96a776-d2b3-470b-aff3-559fc8afc17f" containerName="nova-metadata-log" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.465862 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda9f246-8001-4ff0-bfd4-660adb11240c" containerName="nova-scheduler-scheduler" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.465870 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5c3a3a-8314-4a01-b164-038d9247e569" containerName="nova-manage" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.465880 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="51698a4f-9e64-41ea-9130-c197b4505acb" containerName="dnsmasq-dns" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.466633 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.469409 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.472079 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0b96a776-d2b3-470b-aff3-559fc8afc17f" (UID: "0b96a776-d2b3-470b-aff3-559fc8afc17f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.476288 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-config-data" (OuterVolumeSpecName: "config-data") pod "0b96a776-d2b3-470b-aff3-559fc8afc17f" (UID: "0b96a776-d2b3-470b-aff3-559fc8afc17f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.482205 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.482233 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.482242 4644 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b96a776-d2b3-470b-aff3-559fc8afc17f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.482253 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b96a776-d2b3-470b-aff3-559fc8afc17f-logs\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.482261 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q22kj\" (UniqueName: \"kubernetes.io/projected/0b96a776-d2b3-470b-aff3-559fc8afc17f-kube-api-access-q22kj\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.490993 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.503767 4644 scope.go:117] "RemoveContainer" containerID="ab67f5096f47ebb2c1469268bcff8ff04d272b2064f447f0ce0a792283decd8a" Feb 04 09:03:42 crc kubenswrapper[4644]: E0204 09:03:42.504225 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab67f5096f47ebb2c1469268bcff8ff04d272b2064f447f0ce0a792283decd8a\": container with ID starting with ab67f5096f47ebb2c1469268bcff8ff04d272b2064f447f0ce0a792283decd8a not found: ID does not exist" containerID="ab67f5096f47ebb2c1469268bcff8ff04d272b2064f447f0ce0a792283decd8a" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.504265 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab67f5096f47ebb2c1469268bcff8ff04d272b2064f447f0ce0a792283decd8a"} err="failed to get container status \"ab67f5096f47ebb2c1469268bcff8ff04d272b2064f447f0ce0a792283decd8a\": rpc error: code = NotFound desc = could not find container \"ab67f5096f47ebb2c1469268bcff8ff04d272b2064f447f0ce0a792283decd8a\": container with ID starting with ab67f5096f47ebb2c1469268bcff8ff04d272b2064f447f0ce0a792283decd8a not found: ID does not exist" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.504291 4644 scope.go:117] "RemoveContainer" containerID="5b56ad3ca9d1830f449dc90ef39f6a93297dcf6c472ae95797be269f72741752" Feb 04 09:03:42 crc kubenswrapper[4644]: E0204 09:03:42.504745 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b56ad3ca9d1830f449dc90ef39f6a93297dcf6c472ae95797be269f72741752\": container with ID starting with 5b56ad3ca9d1830f449dc90ef39f6a93297dcf6c472ae95797be269f72741752 not found: ID does not exist" containerID="5b56ad3ca9d1830f449dc90ef39f6a93297dcf6c472ae95797be269f72741752" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.504780 4644 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"5b56ad3ca9d1830f449dc90ef39f6a93297dcf6c472ae95797be269f72741752"} err="failed to get container status \"5b56ad3ca9d1830f449dc90ef39f6a93297dcf6c472ae95797be269f72741752\": rpc error: code = NotFound desc = could not find container \"5b56ad3ca9d1830f449dc90ef39f6a93297dcf6c472ae95797be269f72741752\": container with ID starting with 5b56ad3ca9d1830f449dc90ef39f6a93297dcf6c472ae95797be269f72741752 not found: ID does not exist" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.584679 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2ss2\" (UniqueName: \"kubernetes.io/projected/9decc8da-612f-4d8e-9ec7-b3894e3456f5-kube-api-access-f2ss2\") pod \"nova-scheduler-0\" (UID: \"9decc8da-612f-4d8e-9ec7-b3894e3456f5\") " pod="openstack/nova-scheduler-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.585494 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9decc8da-612f-4d8e-9ec7-b3894e3456f5-config-data\") pod \"nova-scheduler-0\" (UID: \"9decc8da-612f-4d8e-9ec7-b3894e3456f5\") " pod="openstack/nova-scheduler-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.585589 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9decc8da-612f-4d8e-9ec7-b3894e3456f5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9decc8da-612f-4d8e-9ec7-b3894e3456f5\") " pod="openstack/nova-scheduler-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.671876 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda9f246-8001-4ff0-bfd4-660adb11240c" path="/var/lib/kubelet/pods/eda9f246-8001-4ff0-bfd4-660adb11240c/volumes" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.687814 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9decc8da-612f-4d8e-9ec7-b3894e3456f5-config-data\") pod \"nova-scheduler-0\" (UID: \"9decc8da-612f-4d8e-9ec7-b3894e3456f5\") " pod="openstack/nova-scheduler-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.687889 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9decc8da-612f-4d8e-9ec7-b3894e3456f5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9decc8da-612f-4d8e-9ec7-b3894e3456f5\") " pod="openstack/nova-scheduler-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.687978 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2ss2\" (UniqueName: \"kubernetes.io/projected/9decc8da-612f-4d8e-9ec7-b3894e3456f5-kube-api-access-f2ss2\") pod \"nova-scheduler-0\" (UID: \"9decc8da-612f-4d8e-9ec7-b3894e3456f5\") " pod="openstack/nova-scheduler-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.694978 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9decc8da-612f-4d8e-9ec7-b3894e3456f5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9decc8da-612f-4d8e-9ec7-b3894e3456f5\") " pod="openstack/nova-scheduler-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.695022 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 
09:03:42.696093 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9decc8da-612f-4d8e-9ec7-b3894e3456f5-config-data\") pod \"nova-scheduler-0\" (UID: \"9decc8da-612f-4d8e-9ec7-b3894e3456f5\") " pod="openstack/nova-scheduler-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.728227 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.733649 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2ss2\" (UniqueName: \"kubernetes.io/projected/9decc8da-612f-4d8e-9ec7-b3894e3456f5-kube-api-access-f2ss2\") pod \"nova-scheduler-0\" (UID: \"9decc8da-612f-4d8e-9ec7-b3894e3456f5\") " pod="openstack/nova-scheduler-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.749007 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.752681 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.757709 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.761523 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.767945 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.792837 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.892256 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0f95ed-7197-4f32-8d5c-7d9551d0f846-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bc0f95ed-7197-4f32-8d5c-7d9551d0f846\") " pod="openstack/nova-metadata-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.892321 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0f95ed-7197-4f32-8d5c-7d9551d0f846-config-data\") pod \"nova-metadata-0\" (UID: \"bc0f95ed-7197-4f32-8d5c-7d9551d0f846\") " pod="openstack/nova-metadata-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.892384 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc0f95ed-7197-4f32-8d5c-7d9551d0f846-logs\") pod \"nova-metadata-0\" (UID: \"bc0f95ed-7197-4f32-8d5c-7d9551d0f846\") " pod="openstack/nova-metadata-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.892409 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pvzq\" (UniqueName: \"kubernetes.io/projected/bc0f95ed-7197-4f32-8d5c-7d9551d0f846-kube-api-access-7pvzq\") pod \"nova-metadata-0\" (UID: \"bc0f95ed-7197-4f32-8d5c-7d9551d0f846\") " pod="openstack/nova-metadata-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.892531 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc0f95ed-7197-4f32-8d5c-7d9551d0f846-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc0f95ed-7197-4f32-8d5c-7d9551d0f846\") " pod="openstack/nova-metadata-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.995799 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0f95ed-7197-4f32-8d5c-7d9551d0f846-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc0f95ed-7197-4f32-8d5c-7d9551d0f846\") " pod="openstack/nova-metadata-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.996181 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0f95ed-7197-4f32-8d5c-7d9551d0f846-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bc0f95ed-7197-4f32-8d5c-7d9551d0f846\") " pod="openstack/nova-metadata-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.996223 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0f95ed-7197-4f32-8d5c-7d9551d0f846-config-data\") pod \"nova-metadata-0\" (UID: \"bc0f95ed-7197-4f32-8d5c-7d9551d0f846\") " pod="openstack/nova-metadata-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.996407 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc0f95ed-7197-4f32-8d5c-7d9551d0f846-logs\") pod \"nova-metadata-0\" (UID: \"bc0f95ed-7197-4f32-8d5c-7d9551d0f846\") " pod="openstack/nova-metadata-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.996430 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pvzq\" (UniqueName: \"kubernetes.io/projected/bc0f95ed-7197-4f32-8d5c-7d9551d0f846-kube-api-access-7pvzq\") pod \"nova-metadata-0\" (UID: \"bc0f95ed-7197-4f32-8d5c-7d9551d0f846\") " pod="openstack/nova-metadata-0" Feb 04 09:03:42 crc kubenswrapper[4644]: I0204 09:03:42.997676 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc0f95ed-7197-4f32-8d5c-7d9551d0f846-logs\") pod \"nova-metadata-0\" (UID: \"bc0f95ed-7197-4f32-8d5c-7d9551d0f846\") " pod="openstack/nova-metadata-0" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.002999 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0f95ed-7197-4f32-8d5c-7d9551d0f846-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bc0f95ed-7197-4f32-8d5c-7d9551d0f846\") " pod="openstack/nova-metadata-0" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.004741 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0f95ed-7197-4f32-8d5c-7d9551d0f846-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc0f95ed-7197-4f32-8d5c-7d9551d0f846\") " pod="openstack/nova-metadata-0" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.006801 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0f95ed-7197-4f32-8d5c-7d9551d0f846-config-data\") pod \"nova-metadata-0\" (UID: \"bc0f95ed-7197-4f32-8d5c-7d9551d0f846\") " pod="openstack/nova-metadata-0" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.013479 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7pvzq\" (UniqueName: \"kubernetes.io/projected/bc0f95ed-7197-4f32-8d5c-7d9551d0f846-kube-api-access-7pvzq\") pod \"nova-metadata-0\" (UID: \"bc0f95ed-7197-4f32-8d5c-7d9551d0f846\") " pod="openstack/nova-metadata-0" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.085976 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.250740 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 09:03:43 crc kubenswrapper[4644]: W0204 09:03:43.254595 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9decc8da_612f_4d8e_9ec7_b3894e3456f5.slice/crio-0cf2afe88aa347860cf8a358ca162e54dcd159819290a05a4823ecf54f755ec6 WatchSource:0}: Error finding container 0cf2afe88aa347860cf8a358ca162e54dcd159819290a05a4823ecf54f755ec6: Status 404 returned error can't find the container with id 0cf2afe88aa347860cf8a358ca162e54dcd159819290a05a4823ecf54f755ec6 Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.358364 4644 generic.go:334] "Generic (PLEG): container finished" podID="665b396a-9470-4142-af29-a1de2f961433" containerID="09b1294a05fa778de3acf6ef8efe8aae0b263ea1a2b383f62ff9dbbfb5efff9e" exitCode=0 Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.358467 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"665b396a-9470-4142-af29-a1de2f961433","Type":"ContainerDied","Data":"09b1294a05fa778de3acf6ef8efe8aae0b263ea1a2b383f62ff9dbbfb5efff9e"} Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.358803 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"665b396a-9470-4142-af29-a1de2f961433","Type":"ContainerDied","Data":"7cd0027a2e864661f01a8e5931f8e8132ce7ea5ddeb04f94e80990bd26c99d3a"} Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.358823 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cd0027a2e864661f01a8e5931f8e8132ce7ea5ddeb04f94e80990bd26c99d3a" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.360125 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9decc8da-612f-4d8e-9ec7-b3894e3456f5","Type":"ContainerStarted","Data":"0cf2afe88aa347860cf8a358ca162e54dcd159819290a05a4823ecf54f755ec6"} Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.402839 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.505895 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-internal-tls-certs\") pod \"665b396a-9470-4142-af29-a1de2f961433\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.506314 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-public-tls-certs\") pod \"665b396a-9470-4142-af29-a1de2f961433\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.506437 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/665b396a-9470-4142-af29-a1de2f961433-logs\") pod \"665b396a-9470-4142-af29-a1de2f961433\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.506527 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-combined-ca-bundle\") pod \"665b396a-9470-4142-af29-a1de2f961433\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.506592 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t8k5\" (UniqueName: \"kubernetes.io/projected/665b396a-9470-4142-af29-a1de2f961433-kube-api-access-5t8k5\") pod \"665b396a-9470-4142-af29-a1de2f961433\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.506628 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-config-data\") pod \"665b396a-9470-4142-af29-a1de2f961433\" (UID: \"665b396a-9470-4142-af29-a1de2f961433\") " Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.508981 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/665b396a-9470-4142-af29-a1de2f961433-logs" (OuterVolumeSpecName: "logs") pod "665b396a-9470-4142-af29-a1de2f961433" (UID: "665b396a-9470-4142-af29-a1de2f961433"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.519468 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/665b396a-9470-4142-af29-a1de2f961433-kube-api-access-5t8k5" (OuterVolumeSpecName: "kube-api-access-5t8k5") pod "665b396a-9470-4142-af29-a1de2f961433" (UID: "665b396a-9470-4142-af29-a1de2f961433"). InnerVolumeSpecName "kube-api-access-5t8k5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.536857 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "665b396a-9470-4142-af29-a1de2f961433" (UID: "665b396a-9470-4142-af29-a1de2f961433"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.538546 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-config-data" (OuterVolumeSpecName: "config-data") pod "665b396a-9470-4142-af29-a1de2f961433" (UID: "665b396a-9470-4142-af29-a1de2f961433"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.612481 4644 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/665b396a-9470-4142-af29-a1de2f961433-logs\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.612521 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.612557 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t8k5\" (UniqueName: \"kubernetes.io/projected/665b396a-9470-4142-af29-a1de2f961433-kube-api-access-5t8k5\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.612570 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.617358 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "665b396a-9470-4142-af29-a1de2f961433" (UID: "665b396a-9470-4142-af29-a1de2f961433"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.621874 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "665b396a-9470-4142-af29-a1de2f961433" (UID: "665b396a-9470-4142-af29-a1de2f961433"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.642294 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 09:03:43 crc kubenswrapper[4644]: W0204 09:03:43.655030 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc0f95ed_7197_4f32_8d5c_7d9551d0f846.slice/crio-8a0a33c39e15aa8ceafd684c37a523123240ca1d5a01cb4011a36d03ca2ef746 WatchSource:0}: Error finding container 8a0a33c39e15aa8ceafd684c37a523123240ca1d5a01cb4011a36d03ca2ef746: Status 404 returned error can't find the container with id 8a0a33c39e15aa8ceafd684c37a523123240ca1d5a01cb4011a36d03ca2ef746 Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.713921 4644 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:43 crc kubenswrapper[4644]: I0204 09:03:43.714215 4644 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/665b396a-9470-4142-af29-a1de2f961433-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.371624 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9decc8da-612f-4d8e-9ec7-b3894e3456f5","Type":"ContainerStarted","Data":"31250206a3ec5dd1c21a53f7b10c02678f43cd5bda53d25ce79f0e349aa2b386"} Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.374054 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.381603 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc0f95ed-7197-4f32-8d5c-7d9551d0f846","Type":"ContainerStarted","Data":"641936bd48af87864d78903d2959a10d3c77c52063466ffb2c940f704be3479d"} Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.381839 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc0f95ed-7197-4f32-8d5c-7d9551d0f846","Type":"ContainerStarted","Data":"de07e714aecc27d48d14d46543728d0b1134b4dc5c0bcd5ca9b23672667c855f"} Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.381918 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc0f95ed-7197-4f32-8d5c-7d9551d0f846","Type":"ContainerStarted","Data":"8a0a33c39e15aa8ceafd684c37a523123240ca1d5a01cb4011a36d03ca2ef746"} Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.404582 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.404560408 podStartE2EDuration="2.404560408s" podCreationTimestamp="2026-02-04 09:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:03:44.399687506 +0000 UTC m=+1334.439745261" watchObservedRunningTime="2026-02-04 09:03:44.404560408 +0000 UTC m=+1334.444618163" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.429566 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.429541366 podStartE2EDuration="2.429541366s" podCreationTimestamp="2026-02-04 09:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:03:44.416306937 +0000 UTC m=+1334.456364722" watchObservedRunningTime="2026-02-04 09:03:44.429541366 +0000 UTC m=+1334.469599121" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.452071 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.464928 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.479422 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 04 09:03:44 crc kubenswrapper[4644]: E0204 09:03:44.479840 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665b396a-9470-4142-af29-a1de2f961433" containerName="nova-api-log" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.479860 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="665b396a-9470-4142-af29-a1de2f961433" containerName="nova-api-log" Feb 04 09:03:44 crc kubenswrapper[4644]: E0204 09:03:44.479890 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665b396a-9470-4142-af29-a1de2f961433" containerName="nova-api-api" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.479897 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="665b396a-9470-4142-af29-a1de2f961433" containerName="nova-api-api" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.480084 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="665b396a-9470-4142-af29-a1de2f961433" containerName="nova-api-log" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.480101 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="665b396a-9470-4142-af29-a1de2f961433" containerName="nova-api-api" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.481072 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.483584 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.483712 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.483922 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.485285 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.631738 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-logs\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.632341 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-public-tls-certs\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.632502 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-config-data\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.632667 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.632820 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dlds\" (UniqueName: \"kubernetes.io/projected/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-kube-api-access-9dlds\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.632962 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.673596 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b96a776-d2b3-470b-aff3-559fc8afc17f" path="/var/lib/kubelet/pods/0b96a776-d2b3-470b-aff3-559fc8afc17f/volumes" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.696820 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="665b396a-9470-4142-af29-a1de2f961433" path="/var/lib/kubelet/pods/665b396a-9470-4142-af29-a1de2f961433/volumes" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.734781 4644 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.734938 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-logs\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.735013 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-public-tls-certs\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.735084 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-config-data\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.735553 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-logs\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.735812 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.735946 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dlds\" (UniqueName: \"kubernetes.io/projected/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-kube-api-access-9dlds\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.739116 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.740056 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-config-data\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.741124 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.745645 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-public-tls-certs\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.758835 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dlds\" (UniqueName: \"kubernetes.io/projected/96b76067-7c3f-44cb-8d2a-0bbb04035d9c-kube-api-access-9dlds\") pod \"nova-api-0\" (UID: \"96b76067-7c3f-44cb-8d2a-0bbb04035d9c\") " pod="openstack/nova-api-0" Feb 04 09:03:44 crc kubenswrapper[4644]: I0204 09:03:44.809304 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 04 09:03:45 crc kubenswrapper[4644]: I0204 09:03:45.256990 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 04 09:03:45 crc kubenswrapper[4644]: I0204 09:03:45.385821 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"96b76067-7c3f-44cb-8d2a-0bbb04035d9c","Type":"ContainerStarted","Data":"0327cc562a789b5fc33827edd1f5091944e4bf09e34cce54a6a5143014a77e36"} Feb 04 09:03:46 crc kubenswrapper[4644]: I0204 09:03:46.395990 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"96b76067-7c3f-44cb-8d2a-0bbb04035d9c","Type":"ContainerStarted","Data":"8ac57d6d58d37e359dd74e5061f494598f868a6ad0f6846b07f10f7f3daf143a"} Feb 04 09:03:46 crc kubenswrapper[4644]: I0204 09:03:46.396606 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"96b76067-7c3f-44cb-8d2a-0bbb04035d9c","Type":"ContainerStarted","Data":"7087d57d36193bae643358f2216072e3a380be9a243b0d56fb38933cca5e8e4d"} Feb 04 09:03:46 crc kubenswrapper[4644]: I0204 09:03:46.418488 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.418471763 podStartE2EDuration="2.418471763s" podCreationTimestamp="2026-02-04 09:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:03:46.417588588 +0000 UTC m=+1336.457646343" watchObservedRunningTime="2026-02-04 09:03:46.418471763 +0000 UTC m=+1336.458529518" Feb 04 09:03:47 crc kubenswrapper[4644]: I0204 09:03:47.793979 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 04 09:03:48 crc kubenswrapper[4644]: I0204 09:03:48.087392 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 04 09:03:48 crc kubenswrapper[4644]: I0204 09:03:48.087753 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 04 09:03:52 crc kubenswrapper[4644]: I0204 09:03:52.793546 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 04 09:03:52 crc kubenswrapper[4644]: I0204 09:03:52.826650 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 04 09:03:53 crc kubenswrapper[4644]: I0204 09:03:53.087962 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 04 09:03:53 crc kubenswrapper[4644]: I0204 09:03:53.088387 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 04 09:03:53 crc kubenswrapper[4644]: I0204 09:03:53.514712 4644 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 04 09:03:54 crc kubenswrapper[4644]: I0204 09:03:54.102548 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bc0f95ed-7197-4f32-8d5c-7d9551d0f846" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 04 09:03:54 crc kubenswrapper[4644]: I0204 09:03:54.103855 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bc0f95ed-7197-4f32-8d5c-7d9551d0f846" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 04 09:03:54 crc kubenswrapper[4644]: I0204 09:03:54.809757 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 04 09:03:54 crc kubenswrapper[4644]: I0204 09:03:54.809846 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 04 09:03:55 crc kubenswrapper[4644]: I0204 09:03:55.821621 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="96b76067-7c3f-44cb-8d2a-0bbb04035d9c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 04 09:03:55 crc kubenswrapper[4644]: I0204 09:03:55.821663 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="96b76067-7c3f-44cb-8d2a-0bbb04035d9c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 04 09:03:58 crc kubenswrapper[4644]: I0204 09:03:58.988638 4644 scope.go:117] "RemoveContainer" containerID="5a02b5a6c91e5d2c7a931d70fe94b6118cb29fcb86493b1411fd7f96520a428a" Feb 04 09:03:59 crc kubenswrapper[4644]: I0204 09:03:59.020633 4644 scope.go:117] "RemoveContainer" containerID="4ffc14471304be242059cd8c6c74093f200e5f50d92116b8e2e8104093fe4aa0" Feb 04 09:03:59 crc kubenswrapper[4644]: I0204 09:03:59.880046 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 04 09:04:03 crc kubenswrapper[4644]: I0204 09:04:03.092962 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 04 09:04:03 crc kubenswrapper[4644]: I0204 09:04:03.096355 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 04 09:04:03 crc kubenswrapper[4644]: I0204 09:04:03.100250 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 04 09:04:03 crc kubenswrapper[4644]: I0204 09:04:03.600840 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 04 09:04:04 crc kubenswrapper[4644]: I0204 09:04:04.816316 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 04 09:04:04 crc kubenswrapper[4644]: I0204 09:04:04.816768 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 04 09:04:04 crc kubenswrapper[4644]: I0204 09:04:04.817343 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-api-0" Feb 04 09:04:04 crc kubenswrapper[4644]: I0204 09:04:04.830869 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 04 09:04:05 crc kubenswrapper[4644]: I0204 09:04:05.555425 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:04:05 crc kubenswrapper[4644]: I0204 09:04:05.555512 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:04:05 crc kubenswrapper[4644]: I0204 09:04:05.624170 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 04 09:04:05 crc kubenswrapper[4644]: I0204 09:04:05.636148 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 04 09:04:13 crc kubenswrapper[4644]: I0204 09:04:13.357691 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 09:04:14 crc kubenswrapper[4644]: I0204 09:04:14.123822 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 09:04:18 crc kubenswrapper[4644]: I0204 09:04:18.113892 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="cfd19433-aab2-4d07-99e5-edee81956813" containerName="rabbitmq" containerID="cri-o://f407ae2255e701b35fbd409dafecb1cf487bae6e50cc6aca6cf542c5ffa7a1fa" gracePeriod=604796 Feb 04 09:04:18 crc kubenswrapper[4644]: I0204 09:04:18.675875 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" containerName="rabbitmq" containerID="cri-o://773af1f1b7846d8c1b618a8fdeb081326be3ecaed75e21b52f0b8d235bb350d8" gracePeriod=604796 Feb 04 09:04:19 crc kubenswrapper[4644]: I0204 09:04:19.845022 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="cfd19433-aab2-4d07-99e5-edee81956813" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5671: connect: connection refused" Feb 04 09:04:20 crc kubenswrapper[4644]: I0204 09:04:20.164038 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.674364 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.718892 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-server-conf\") pod \"cfd19433-aab2-4d07-99e5-edee81956813\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.719013 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-plugins-conf\") pod \"cfd19433-aab2-4d07-99e5-edee81956813\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.719036 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-plugins\") pod \"cfd19433-aab2-4d07-99e5-edee81956813\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.719060 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cfd19433-aab2-4d07-99e5-edee81956813-pod-info\") pod \"cfd19433-aab2-4d07-99e5-edee81956813\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.719086 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-config-data\") pod \"cfd19433-aab2-4d07-99e5-edee81956813\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.719157 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cfd19433-aab2-4d07-99e5-edee81956813\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.719175 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-erlang-cookie\") pod \"cfd19433-aab2-4d07-99e5-edee81956813\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.719219 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-tls\") pod \"cfd19433-aab2-4d07-99e5-edee81956813\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.719233 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cfd19433-aab2-4d07-99e5-edee81956813-erlang-cookie-secret\") pod \"cfd19433-aab2-4d07-99e5-edee81956813\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.719260 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrt9q\" (UniqueName: \"kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-kube-api-access-rrt9q\") pod \"cfd19433-aab2-4d07-99e5-edee81956813\" (UID: 
\"cfd19433-aab2-4d07-99e5-edee81956813\") " Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.719283 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-confd\") pod \"cfd19433-aab2-4d07-99e5-edee81956813\" (UID: \"cfd19433-aab2-4d07-99e5-edee81956813\") " Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.726535 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cfd19433-aab2-4d07-99e5-edee81956813" (UID: "cfd19433-aab2-4d07-99e5-edee81956813"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.727073 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cfd19433-aab2-4d07-99e5-edee81956813" (UID: "cfd19433-aab2-4d07-99e5-edee81956813"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.752532 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cfd19433-aab2-4d07-99e5-edee81956813" (UID: "cfd19433-aab2-4d07-99e5-edee81956813"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.773493 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "cfd19433-aab2-4d07-99e5-edee81956813" (UID: "cfd19433-aab2-4d07-99e5-edee81956813"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.773574 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cfd19433-aab2-4d07-99e5-edee81956813" (UID: "cfd19433-aab2-4d07-99e5-edee81956813"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.777884 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-config-data" (OuterVolumeSpecName: "config-data") pod "cfd19433-aab2-4d07-99e5-edee81956813" (UID: "cfd19433-aab2-4d07-99e5-edee81956813"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.785494 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfd19433-aab2-4d07-99e5-edee81956813-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cfd19433-aab2-4d07-99e5-edee81956813" (UID: "cfd19433-aab2-4d07-99e5-edee81956813"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.792583 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-kube-api-access-rrt9q" (OuterVolumeSpecName: "kube-api-access-rrt9q") pod "cfd19433-aab2-4d07-99e5-edee81956813" (UID: "cfd19433-aab2-4d07-99e5-edee81956813"). InnerVolumeSpecName "kube-api-access-rrt9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.793039 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cfd19433-aab2-4d07-99e5-edee81956813-pod-info" (OuterVolumeSpecName: "pod-info") pod "cfd19433-aab2-4d07-99e5-edee81956813" (UID: "cfd19433-aab2-4d07-99e5-edee81956813"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.846880 4644 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.847281 4644 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.847759 4644 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cfd19433-aab2-4d07-99e5-edee81956813-pod-info\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.847881 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.848057 4644 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.848173 4644 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.848414 4644 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.848535 4644 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cfd19433-aab2-4d07-99e5-edee81956813-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.848651 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrt9q\" (UniqueName: \"kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-kube-api-access-rrt9q\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.859402 4644 generic.go:334] "Generic (PLEG): container finished" podID="cfd19433-aab2-4d07-99e5-edee81956813" 
containerID="f407ae2255e701b35fbd409dafecb1cf487bae6e50cc6aca6cf542c5ffa7a1fa" exitCode=0 Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.859478 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cfd19433-aab2-4d07-99e5-edee81956813","Type":"ContainerDied","Data":"f407ae2255e701b35fbd409dafecb1cf487bae6e50cc6aca6cf542c5ffa7a1fa"} Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.859513 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cfd19433-aab2-4d07-99e5-edee81956813","Type":"ContainerDied","Data":"c7c35b2caf433f394fc693567e18422bb37067e89a1d3da7b7b5463036ea5f89"} Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.859534 4644 scope.go:117] "RemoveContainer" containerID="f407ae2255e701b35fbd409dafecb1cf487bae6e50cc6aca6cf542c5ffa7a1fa" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.859816 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.912466 4644 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.940650 4644 scope.go:117] "RemoveContainer" containerID="55f7f2cedf9f2dc9d7b77ee907f455defd316d052c88269f3c66e4c1401cf77a" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.952565 4644 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.969983 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-server-conf" (OuterVolumeSpecName: "server-conf") pod "cfd19433-aab2-4d07-99e5-edee81956813" (UID: "cfd19433-aab2-4d07-99e5-edee81956813"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.978592 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cfd19433-aab2-4d07-99e5-edee81956813" (UID: "cfd19433-aab2-4d07-99e5-edee81956813"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.984201 4644 scope.go:117] "RemoveContainer" containerID="f407ae2255e701b35fbd409dafecb1cf487bae6e50cc6aca6cf542c5ffa7a1fa" Feb 04 09:04:24 crc kubenswrapper[4644]: E0204 09:04:24.984686 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f407ae2255e701b35fbd409dafecb1cf487bae6e50cc6aca6cf542c5ffa7a1fa\": container with ID starting with f407ae2255e701b35fbd409dafecb1cf487bae6e50cc6aca6cf542c5ffa7a1fa not found: ID does not exist" containerID="f407ae2255e701b35fbd409dafecb1cf487bae6e50cc6aca6cf542c5ffa7a1fa" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.984732 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f407ae2255e701b35fbd409dafecb1cf487bae6e50cc6aca6cf542c5ffa7a1fa"} err="failed to get container status \"f407ae2255e701b35fbd409dafecb1cf487bae6e50cc6aca6cf542c5ffa7a1fa\": rpc error: code = NotFound desc = could not find container \"f407ae2255e701b35fbd409dafecb1cf487bae6e50cc6aca6cf542c5ffa7a1fa\": container with ID starting with f407ae2255e701b35fbd409dafecb1cf487bae6e50cc6aca6cf542c5ffa7a1fa not found: ID does not exist" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.984761 4644 scope.go:117] "RemoveContainer" containerID="55f7f2cedf9f2dc9d7b77ee907f455defd316d052c88269f3c66e4c1401cf77a" Feb 04 09:04:24 crc kubenswrapper[4644]: E0204 09:04:24.987525 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55f7f2cedf9f2dc9d7b77ee907f455defd316d052c88269f3c66e4c1401cf77a\": container with ID starting with 55f7f2cedf9f2dc9d7b77ee907f455defd316d052c88269f3c66e4c1401cf77a not found: ID does not exist" containerID="55f7f2cedf9f2dc9d7b77ee907f455defd316d052c88269f3c66e4c1401cf77a" Feb 04 09:04:24 crc kubenswrapper[4644]: I0204 09:04:24.987568 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f7f2cedf9f2dc9d7b77ee907f455defd316d052c88269f3c66e4c1401cf77a"} err="failed to get container status \"55f7f2cedf9f2dc9d7b77ee907f455defd316d052c88269f3c66e4c1401cf77a\": rpc error: code = NotFound desc = could not find container \"55f7f2cedf9f2dc9d7b77ee907f455defd316d052c88269f3c66e4c1401cf77a\": container with ID starting with 55f7f2cedf9f2dc9d7b77ee907f455defd316d052c88269f3c66e4c1401cf77a not found: ID does not exist" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.054634 4644 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cfd19433-aab2-4d07-99e5-edee81956813-server-conf\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.054675 4644 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cfd19433-aab2-4d07-99e5-edee81956813-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.198418 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.208445 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.215908 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.231485 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 09:04:25 crc kubenswrapper[4644]: E0204 09:04:25.231846 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" containerName="setup-container" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.231861 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" containerName="setup-container" Feb 04 09:04:25 crc kubenswrapper[4644]: E0204 09:04:25.231890 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" containerName="rabbitmq" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.231896 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" containerName="rabbitmq" Feb 04 09:04:25 crc kubenswrapper[4644]: E0204 09:04:25.231906 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd19433-aab2-4d07-99e5-edee81956813" containerName="setup-container" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.231911 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd19433-aab2-4d07-99e5-edee81956813" containerName="setup-container" Feb 04 09:04:25 crc kubenswrapper[4644]: E0204 09:04:25.231922 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd19433-aab2-4d07-99e5-edee81956813" containerName="rabbitmq" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.231928 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd19433-aab2-4d07-99e5-edee81956813" containerName="rabbitmq" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.232087 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" containerName="rabbitmq" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.232110 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfd19433-aab2-4d07-99e5-edee81956813" containerName="rabbitmq" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.232981 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.243843 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.244206 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.245838 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-tgb8m" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.246075 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.246178 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.258704 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.259072 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.279315 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.359502 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-confd\") pod \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.359773 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-server-conf\") pod \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.359852 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-tls\") pod \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360005 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-plugins\") pod \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360129 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-erlang-cookie-secret\") pod \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360288 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ph5j\" (UniqueName: \"kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-kube-api-access-8ph5j\") pod \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " Feb 04 09:04:25 crc 
kubenswrapper[4644]: I0204 09:04:25.360359 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" (UID: "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360418 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-pod-info\") pod \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360463 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-erlang-cookie\") pod \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360495 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-plugins-conf\") pod \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360520 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-config-data\") pod \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360535 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\" (UID: \"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d\") " Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360747 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360796 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-config-data\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360841 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360896 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360915 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnrvl\" (UniqueName: \"kubernetes.io/projected/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-kube-api-access-dnrvl\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360945 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360974 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.360995 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.361044 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.361083 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.361109 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.361171 4644 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.361493 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" (UID: "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.366226 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" (UID: "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.367925 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" (UID: "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.368125 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" (UID: "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.369505 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-kube-api-access-8ph5j" (OuterVolumeSpecName: "kube-api-access-8ph5j") pod "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" (UID: "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d"). InnerVolumeSpecName "kube-api-access-8ph5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.370429 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-pod-info" (OuterVolumeSpecName: "pod-info") pod "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" (UID: "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.370870 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" (UID: "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.414523 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-config-data" (OuterVolumeSpecName: "config-data") pod "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" (UID: "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.439769 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-server-conf" (OuterVolumeSpecName: "server-conf") pod "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" (UID: "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463010 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-config-data\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463082 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463126 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463145 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnrvl\" (UniqueName: \"kubernetes.io/projected/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-kube-api-access-dnrvl\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463175 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463209 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463227 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463271 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463311 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463347 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463383 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463437 4644 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463447 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463465 4644 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463476 4644 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-server-conf\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463486 4644 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463496 4644 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463507 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ph5j\" (UniqueName: \"kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-kube-api-access-8ph5j\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463516 4644 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-pod-info\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463525 4644 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.463838 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-config-data\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.464101 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.465211 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.465604 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.467522 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.469906 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.471635 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.479370 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.483452 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnrvl\" (UniqueName: \"kubernetes.io/projected/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-kube-api-access-dnrvl\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.489820 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.490789 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ca7a0ec9-ff74-4989-b66e-29bfc47bc73d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.494877 4644 operation_generator.go:917] 
UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.531519 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d\") " pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.560854 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.565955 4644 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.579686 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" (UID: "4c8d6805-7c94-4d37-94c6-f3c2331cfc2d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:04:25 crc kubenswrapper[4644]: I0204 09:04:25.668478 4644 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:25.871770 4644 generic.go:334] "Generic (PLEG): container finished" podID="4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" containerID="773af1f1b7846d8c1b618a8fdeb081326be3ecaed75e21b52f0b8d235bb350d8" exitCode=0 Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:25.871855 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d","Type":"ContainerDied","Data":"773af1f1b7846d8c1b618a8fdeb081326be3ecaed75e21b52f0b8d235bb350d8"} Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:25.871882 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c8d6805-7c94-4d37-94c6-f3c2331cfc2d","Type":"ContainerDied","Data":"327a530cc5033a4f62d47a8548278095c453566f0986737c95124b2da8826a33"} Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:25.871899 4644 scope.go:117] "RemoveContainer" containerID="773af1f1b7846d8c1b618a8fdeb081326be3ecaed75e21b52f0b8d235bb350d8" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:25.871974 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:25.929480 4644 scope.go:117] "RemoveContainer" containerID="476c3b5afbd1bbb2a54d82ba89febe946b8c332f1976574c446aab96bda77407" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:25.938135 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:25.969388 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:25.997732 4644 scope.go:117] "RemoveContainer" containerID="773af1f1b7846d8c1b618a8fdeb081326be3ecaed75e21b52f0b8d235bb350d8" Feb 04 09:04:26 crc kubenswrapper[4644]: E0204 09:04:25.998397 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"773af1f1b7846d8c1b618a8fdeb081326be3ecaed75e21b52f0b8d235bb350d8\": container with ID starting with 773af1f1b7846d8c1b618a8fdeb081326be3ecaed75e21b52f0b8d235bb350d8 not found: ID does not exist" containerID="773af1f1b7846d8c1b618a8fdeb081326be3ecaed75e21b52f0b8d235bb350d8" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:25.998431 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"773af1f1b7846d8c1b618a8fdeb081326be3ecaed75e21b52f0b8d235bb350d8"} err="failed to get container status \"773af1f1b7846d8c1b618a8fdeb081326be3ecaed75e21b52f0b8d235bb350d8\": rpc error: code = NotFound desc = could not find container \"773af1f1b7846d8c1b618a8fdeb081326be3ecaed75e21b52f0b8d235bb350d8\": container with ID starting with 773af1f1b7846d8c1b618a8fdeb081326be3ecaed75e21b52f0b8d235bb350d8 not found: ID does not exist" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:25.998460 4644 scope.go:117] "RemoveContainer" containerID="476c3b5afbd1bbb2a54d82ba89febe946b8c332f1976574c446aab96bda77407" Feb 04 09:04:26 crc kubenswrapper[4644]: E0204 09:04:25.998685 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"476c3b5afbd1bbb2a54d82ba89febe946b8c332f1976574c446aab96bda77407\": container with ID starting with 476c3b5afbd1bbb2a54d82ba89febe946b8c332f1976574c446aab96bda77407 not found: ID does not exist" containerID="476c3b5afbd1bbb2a54d82ba89febe946b8c332f1976574c446aab96bda77407" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:25.998705 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"476c3b5afbd1bbb2a54d82ba89febe946b8c332f1976574c446aab96bda77407"} err="failed to get container status \"476c3b5afbd1bbb2a54d82ba89febe946b8c332f1976574c446aab96bda77407\": rpc error: code = NotFound desc = could not find container \"476c3b5afbd1bbb2a54d82ba89febe946b8c332f1976574c446aab96bda77407\": container with ID starting with 476c3b5afbd1bbb2a54d82ba89febe946b8c332f1976574c446aab96bda77407 not found: ID does not exist" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.013598 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.015413 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.026021 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.026262 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.026380 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.026489 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-h8zm5" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.026637 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.026760 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.026883 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.033892 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.102769 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.176177 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-zgs49"] Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.177693 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.178256 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ccc5a46e-238d-43d7-9d48-311b21c76326-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.178316 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v5hm\" (UniqueName: \"kubernetes.io/projected/ccc5a46e-238d-43d7-9d48-311b21c76326-kube-api-access-7v5hm\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.178406 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ccc5a46e-238d-43d7-9d48-311b21c76326-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.178428 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ccc5a46e-238d-43d7-9d48-311b21c76326-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.178466 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccc5a46e-238d-43d7-9d48-311b21c76326-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.178481 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.178504 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ccc5a46e-238d-43d7-9d48-311b21c76326-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.184516 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ccc5a46e-238d-43d7-9d48-311b21c76326-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.184640 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ccc5a46e-238d-43d7-9d48-311b21c76326-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.184673 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ccc5a46e-238d-43d7-9d48-311b21c76326-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.184706 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ccc5a46e-238d-43d7-9d48-311b21c76326-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.195234 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.197433 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-zgs49"] Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.286190 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccc5a46e-238d-43d7-9d48-311b21c76326-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.286240 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.286289 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.286615 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.287062 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccc5a46e-238d-43d7-9d48-311b21c76326-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.289590 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ccc5a46e-238d-43d7-9d48-311b21c76326-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.289626 4644 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxn9h\" (UniqueName: \"kubernetes.io/projected/13cb62fd-7b8e-478c-a868-d9f3858b1679-kube-api-access-lxn9h\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.289663 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ccc5a46e-238d-43d7-9d48-311b21c76326-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.289710 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ccc5a46e-238d-43d7-9d48-311b21c76326-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.289748 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ccc5a46e-238d-43d7-9d48-311b21c76326-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.291825 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ccc5a46e-238d-43d7-9d48-311b21c76326-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.291893 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.291960 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.291989 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-dns-svc\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.290096 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ccc5a46e-238d-43d7-9d48-311b21c76326-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 
09:04:26.290102 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ccc5a46e-238d-43d7-9d48-311b21c76326-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.290571 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ccc5a46e-238d-43d7-9d48-311b21c76326-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.292954 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ccc5a46e-238d-43d7-9d48-311b21c76326-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.296491 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-config\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.296542 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ccc5a46e-238d-43d7-9d48-311b21c76326-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.296707 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.296803 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v5hm\" (UniqueName: \"kubernetes.io/projected/ccc5a46e-238d-43d7-9d48-311b21c76326-kube-api-access-7v5hm\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.296853 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ccc5a46e-238d-43d7-9d48-311b21c76326-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.296873 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ccc5a46e-238d-43d7-9d48-311b21c76326-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.297648 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/ccc5a46e-238d-43d7-9d48-311b21c76326-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.297971 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ccc5a46e-238d-43d7-9d48-311b21c76326-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.299148 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ccc5a46e-238d-43d7-9d48-311b21c76326-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.300838 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ccc5a46e-238d-43d7-9d48-311b21c76326-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.311058 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v5hm\" (UniqueName: \"kubernetes.io/projected/ccc5a46e-238d-43d7-9d48-311b21c76326-kube-api-access-7v5hm\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.314483 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ccc5a46e-238d-43d7-9d48-311b21c76326\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.361723 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.398988 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.399045 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.399071 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-dns-svc\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.399103 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-config\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.399129 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.399189 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.399209 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxn9h\" (UniqueName: \"kubernetes.io/projected/13cb62fd-7b8e-478c-a868-d9f3858b1679-kube-api-access-lxn9h\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.400343 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-dns-svc\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.400864 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc 
kubenswrapper[4644]: I0204 09:04:26.401265 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.401379 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.402002 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.402812 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-config\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.419420 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxn9h\" (UniqueName: \"kubernetes.io/projected/13cb62fd-7b8e-478c-a868-d9f3858b1679-kube-api-access-lxn9h\") pod \"dnsmasq-dns-d558885bc-zgs49\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.519163 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.691680 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c8d6805-7c94-4d37-94c6-f3c2331cfc2d" path="/var/lib/kubelet/pods/4c8d6805-7c94-4d37-94c6-f3c2331cfc2d/volumes" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.699005 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfd19433-aab2-4d07-99e5-edee81956813" path="/var/lib/kubelet/pods/cfd19433-aab2-4d07-99e5-edee81956813/volumes" Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.818376 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.877639 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-zgs49"] Feb 04 09:04:26 crc kubenswrapper[4644]: W0204 09:04:26.887460 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13cb62fd_7b8e_478c_a868_d9f3858b1679.slice/crio-17bc6f1be937aee34fde14ef1e7e3cf746e80c16a6009e215069f153e4bd9ed4 WatchSource:0}: Error finding container 17bc6f1be937aee34fde14ef1e7e3cf746e80c16a6009e215069f153e4bd9ed4: Status 404 returned error can't find the container with id 17bc6f1be937aee34fde14ef1e7e3cf746e80c16a6009e215069f153e4bd9ed4 Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.894041 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ccc5a46e-238d-43d7-9d48-311b21c76326","Type":"ContainerStarted","Data":"0c925c2e3cea46612e75fef3440380671200311033d4a94a849c69f34ca06cbb"} Feb 04 09:04:26 crc kubenswrapper[4644]: I0204 09:04:26.896012 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d","Type":"ContainerStarted","Data":"29f9920271a2942dfc04d6960f541732389dd5c44c5aaa120228707074ba03ff"} Feb 04 09:04:27 crc kubenswrapper[4644]: I0204 09:04:27.910150 4644 generic.go:334] "Generic (PLEG): container finished" podID="13cb62fd-7b8e-478c-a868-d9f3858b1679" containerID="b4ea2edce7481898ede27f2e0632b2493e34a218426087dca53f773598f2528c" exitCode=0 Feb 04 09:04:27 crc kubenswrapper[4644]: I0204 09:04:27.910229 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-zgs49" event={"ID":"13cb62fd-7b8e-478c-a868-d9f3858b1679","Type":"ContainerDied","Data":"b4ea2edce7481898ede27f2e0632b2493e34a218426087dca53f773598f2528c"} Feb 04 09:04:27 crc kubenswrapper[4644]: I0204 09:04:27.910927 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-zgs49" event={"ID":"13cb62fd-7b8e-478c-a868-d9f3858b1679","Type":"ContainerStarted","Data":"17bc6f1be937aee34fde14ef1e7e3cf746e80c16a6009e215069f153e4bd9ed4"} Feb 04 09:04:27 crc kubenswrapper[4644]: I0204 09:04:27.913219 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d","Type":"ContainerStarted","Data":"01c7e1acbed82dcecbbbf3e469fba3f371abf3014bd492e5094f044ecb181438"} Feb 04 09:04:28 crc kubenswrapper[4644]: I0204 09:04:28.923447 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ccc5a46e-238d-43d7-9d48-311b21c76326","Type":"ContainerStarted","Data":"4cfb01a177f94dd6668e54c28b2f408c5762ac2aa45b42c2164f5e04143eaf18"} Feb 04 
09:04:28 crc kubenswrapper[4644]: I0204 09:04:28.926105 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-zgs49" event={"ID":"13cb62fd-7b8e-478c-a868-d9f3858b1679","Type":"ContainerStarted","Data":"670c2e5f4ed96c187b0beb28f86190ee1f82d5b48e3b119af7d03ec908f58abf"} Feb 04 09:04:28 crc kubenswrapper[4644]: I0204 09:04:28.985196 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-zgs49" podStartSLOduration=2.9851732589999997 podStartE2EDuration="2.985173259s" podCreationTimestamp="2026-02-04 09:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:04:28.975234719 +0000 UTC m=+1379.015292474" watchObservedRunningTime="2026-02-04 09:04:28.985173259 +0000 UTC m=+1379.025231024" Feb 04 09:04:29 crc kubenswrapper[4644]: I0204 09:04:29.937072 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:35 crc kubenswrapper[4644]: I0204 09:04:35.555773 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:04:35 crc kubenswrapper[4644]: I0204 09:04:35.556224 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:04:36 crc kubenswrapper[4644]: I0204 09:04:36.521537 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:36 crc kubenswrapper[4644]: I0204 09:04:36.603520 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-c6mnd"] Feb 04 09:04:36 crc kubenswrapper[4644]: I0204 09:04:36.603748 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" podUID="63d82fcd-bb75-4881-b500-a77feab77cfc" containerName="dnsmasq-dns" containerID="cri-o://84608fa017f5a390514c855df5d03567d32bc3e1a6e579c5c9a7352c1505d86b" gracePeriod=10 Feb 04 09:04:36 crc kubenswrapper[4644]: I0204 09:04:36.983724 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67cb876dc9-fk6fk"] Feb 04 09:04:36 crc kubenswrapper[4644]: I0204 09:04:36.985548 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.002724 4644 generic.go:334] "Generic (PLEG): container finished" podID="63d82fcd-bb75-4881-b500-a77feab77cfc" containerID="84608fa017f5a390514c855df5d03567d32bc3e1a6e579c5c9a7352c1505d86b" exitCode=0 Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.002877 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" event={"ID":"63d82fcd-bb75-4881-b500-a77feab77cfc","Type":"ContainerDied","Data":"84608fa017f5a390514c855df5d03567d32bc3e1a6e579c5c9a7352c1505d86b"} Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.016594 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67cb876dc9-fk6fk"] Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.028238 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-openstack-edpm-ipam\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.028367 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xkhw\" (UniqueName: \"kubernetes.io/projected/a581143f-dc8c-4226-a36c-5ece09be2e6f-kube-api-access-8xkhw\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.028432 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-dns-svc\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.028453 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-ovsdbserver-sb\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.028539 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-config\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.028715 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-ovsdbserver-nb\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.028875 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-dns-swift-storage-0\") pod 
\"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.133096 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-openstack-edpm-ipam\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.133217 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xkhw\" (UniqueName: \"kubernetes.io/projected/a581143f-dc8c-4226-a36c-5ece09be2e6f-kube-api-access-8xkhw\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.133260 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-dns-svc\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.133284 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-ovsdbserver-sb\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.133350 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-config\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.133397 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-ovsdbserver-nb\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.133437 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-dns-swift-storage-0\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.134762 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-dns-swift-storage-0\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.135159 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-dns-svc\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: 
\"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.135763 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-ovsdbserver-sb\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.135927 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-ovsdbserver-nb\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.135984 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-config\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.136092 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a581143f-dc8c-4226-a36c-5ece09be2e6f-openstack-edpm-ipam\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.159300 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xkhw\" (UniqueName: \"kubernetes.io/projected/a581143f-dc8c-4226-a36c-5ece09be2e6f-kube-api-access-8xkhw\") pod \"dnsmasq-dns-67cb876dc9-fk6fk\" (UID: \"a581143f-dc8c-4226-a36c-5ece09be2e6f\") " pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.306826 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.431816 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.540032 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-ovsdbserver-nb\") pod \"63d82fcd-bb75-4881-b500-a77feab77cfc\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.540080 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-ovsdbserver-sb\") pod \"63d82fcd-bb75-4881-b500-a77feab77cfc\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.540161 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prwf9\" (UniqueName: \"kubernetes.io/projected/63d82fcd-bb75-4881-b500-a77feab77cfc-kube-api-access-prwf9\") pod \"63d82fcd-bb75-4881-b500-a77feab77cfc\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.540194 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-dns-swift-storage-0\") pod \"63d82fcd-bb75-4881-b500-a77feab77cfc\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.540270 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-dns-svc\") pod \"63d82fcd-bb75-4881-b500-a77feab77cfc\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.540309 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-config\") pod \"63d82fcd-bb75-4881-b500-a77feab77cfc\" (UID: \"63d82fcd-bb75-4881-b500-a77feab77cfc\") " Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.549562 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63d82fcd-bb75-4881-b500-a77feab77cfc-kube-api-access-prwf9" (OuterVolumeSpecName: "kube-api-access-prwf9") pod "63d82fcd-bb75-4881-b500-a77feab77cfc" (UID: "63d82fcd-bb75-4881-b500-a77feab77cfc"). InnerVolumeSpecName "kube-api-access-prwf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.599526 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "63d82fcd-bb75-4881-b500-a77feab77cfc" (UID: "63d82fcd-bb75-4881-b500-a77feab77cfc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.599543 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "63d82fcd-bb75-4881-b500-a77feab77cfc" (UID: "63d82fcd-bb75-4881-b500-a77feab77cfc"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.604103 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "63d82fcd-bb75-4881-b500-a77feab77cfc" (UID: "63d82fcd-bb75-4881-b500-a77feab77cfc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.607404 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "63d82fcd-bb75-4881-b500-a77feab77cfc" (UID: "63d82fcd-bb75-4881-b500-a77feab77cfc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.636526 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-config" (OuterVolumeSpecName: "config") pod "63d82fcd-bb75-4881-b500-a77feab77cfc" (UID: "63d82fcd-bb75-4881-b500-a77feab77cfc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.647011 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prwf9\" (UniqueName: \"kubernetes.io/projected/63d82fcd-bb75-4881-b500-a77feab77cfc-kube-api-access-prwf9\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.647042 4644 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.647052 4644 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.647060 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-config\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.647069 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.647077 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63d82fcd-bb75-4881-b500-a77feab77cfc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:37 crc kubenswrapper[4644]: I0204 09:04:37.793050 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67cb876dc9-fk6fk"] Feb 04 09:04:38 crc kubenswrapper[4644]: I0204 09:04:38.011361 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" event={"ID":"a581143f-dc8c-4226-a36c-5ece09be2e6f","Type":"ContainerStarted","Data":"3c25c831954aef941f96634e9c4859c8bd4b6910bf827b8ae4f6c098b2bf7719"} Feb 04 09:04:38 crc kubenswrapper[4644]: I0204 09:04:38.013797 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" event={"ID":"63d82fcd-bb75-4881-b500-a77feab77cfc","Type":"ContainerDied","Data":"29f3f4c3f370ad013304a5fdb2c6cec69716c672bdfe70da106b8034289055b9"} Feb 04 09:04:38 crc kubenswrapper[4644]: I0204 09:04:38.013846 4644 scope.go:117] "RemoveContainer" containerID="84608fa017f5a390514c855df5d03567d32bc3e1a6e579c5c9a7352c1505d86b" Feb 04 09:04:38 crc kubenswrapper[4644]: I0204 09:04:38.014010 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-c6mnd" Feb 04 09:04:38 crc kubenswrapper[4644]: I0204 09:04:38.048943 4644 scope.go:117] "RemoveContainer" containerID="0987936515dd3bba8336abc4f2143e4de489a0c0e57eb1384700a2431bd88988" Feb 04 09:04:38 crc kubenswrapper[4644]: I0204 09:04:38.071499 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-c6mnd"] Feb 04 09:04:38 crc kubenswrapper[4644]: I0204 09:04:38.089140 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-c6mnd"] Feb 04 09:04:38 crc kubenswrapper[4644]: I0204 09:04:38.669305 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63d82fcd-bb75-4881-b500-a77feab77cfc" path="/var/lib/kubelet/pods/63d82fcd-bb75-4881-b500-a77feab77cfc/volumes" Feb 04 09:04:39 crc kubenswrapper[4644]: I0204 09:04:39.025024 4644 generic.go:334] "Generic (PLEG): container finished" podID="a581143f-dc8c-4226-a36c-5ece09be2e6f" containerID="153c42c7ed46f3d061d054fd5ee7ce568b2de6bed11a75fc2abe3ef74b64e0e8" exitCode=0 Feb 04 09:04:39 crc kubenswrapper[4644]: I0204 09:04:39.025213 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" event={"ID":"a581143f-dc8c-4226-a36c-5ece09be2e6f","Type":"ContainerDied","Data":"153c42c7ed46f3d061d054fd5ee7ce568b2de6bed11a75fc2abe3ef74b64e0e8"} Feb 04 09:04:40 crc kubenswrapper[4644]: I0204 09:04:40.034544 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" event={"ID":"a581143f-dc8c-4226-a36c-5ece09be2e6f","Type":"ContainerStarted","Data":"1ee1707e092c2e66fb34474c9d6d98f89cb70e1da36acbcd441d15d32696d500"} Feb 04 09:04:40 crc kubenswrapper[4644]: I0204 09:04:40.035940 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:40 crc kubenswrapper[4644]: I0204 09:04:40.061476 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" podStartSLOduration=4.061457917 podStartE2EDuration="4.061457917s" podCreationTimestamp="2026-02-04 09:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:04:40.05531715 +0000 UTC m=+1390.095374895" watchObservedRunningTime="2026-02-04 09:04:40.061457917 +0000 UTC m=+1390.101515672" Feb 04 09:04:47 crc kubenswrapper[4644]: I0204 09:04:47.308098 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67cb876dc9-fk6fk" Feb 04 09:04:47 crc kubenswrapper[4644]: I0204 09:04:47.395822 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-zgs49"] Feb 04 09:04:47 crc kubenswrapper[4644]: I0204 09:04:47.396086 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-zgs49" podUID="13cb62fd-7b8e-478c-a868-d9f3858b1679" containerName="dnsmasq-dns" 
containerID="cri-o://670c2e5f4ed96c187b0beb28f86190ee1f82d5b48e3b119af7d03ec908f58abf" gracePeriod=10 Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.121243 4644 generic.go:334] "Generic (PLEG): container finished" podID="13cb62fd-7b8e-478c-a868-d9f3858b1679" containerID="670c2e5f4ed96c187b0beb28f86190ee1f82d5b48e3b119af7d03ec908f58abf" exitCode=0 Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.121316 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-zgs49" event={"ID":"13cb62fd-7b8e-478c-a868-d9f3858b1679","Type":"ContainerDied","Data":"670c2e5f4ed96c187b0beb28f86190ee1f82d5b48e3b119af7d03ec908f58abf"} Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.121546 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-zgs49" event={"ID":"13cb62fd-7b8e-478c-a868-d9f3858b1679","Type":"ContainerDied","Data":"17bc6f1be937aee34fde14ef1e7e3cf746e80c16a6009e215069f153e4bd9ed4"} Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.121562 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17bc6f1be937aee34fde14ef1e7e3cf746e80c16a6009e215069f153e4bd9ed4" Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.156299 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.308584 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-config\") pod \"13cb62fd-7b8e-478c-a868-d9f3858b1679\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.308648 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-dns-svc\") pod \"13cb62fd-7b8e-478c-a868-d9f3858b1679\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.308684 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-ovsdbserver-sb\") pod \"13cb62fd-7b8e-478c-a868-d9f3858b1679\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.308915 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-openstack-edpm-ipam\") pod \"13cb62fd-7b8e-478c-a868-d9f3858b1679\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.308949 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-ovsdbserver-nb\") pod \"13cb62fd-7b8e-478c-a868-d9f3858b1679\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.308989 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-dns-swift-storage-0\") pod \"13cb62fd-7b8e-478c-a868-d9f3858b1679\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 
09:04:48.309011 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxn9h\" (UniqueName: \"kubernetes.io/projected/13cb62fd-7b8e-478c-a868-d9f3858b1679-kube-api-access-lxn9h\") pod \"13cb62fd-7b8e-478c-a868-d9f3858b1679\" (UID: \"13cb62fd-7b8e-478c-a868-d9f3858b1679\") " Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.317521 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13cb62fd-7b8e-478c-a868-d9f3858b1679-kube-api-access-lxn9h" (OuterVolumeSpecName: "kube-api-access-lxn9h") pod "13cb62fd-7b8e-478c-a868-d9f3858b1679" (UID: "13cb62fd-7b8e-478c-a868-d9f3858b1679"). InnerVolumeSpecName "kube-api-access-lxn9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.364484 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-config" (OuterVolumeSpecName: "config") pod "13cb62fd-7b8e-478c-a868-d9f3858b1679" (UID: "13cb62fd-7b8e-478c-a868-d9f3858b1679"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.365712 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "13cb62fd-7b8e-478c-a868-d9f3858b1679" (UID: "13cb62fd-7b8e-478c-a868-d9f3858b1679"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.376813 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "13cb62fd-7b8e-478c-a868-d9f3858b1679" (UID: "13cb62fd-7b8e-478c-a868-d9f3858b1679"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.386834 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13cb62fd-7b8e-478c-a868-d9f3858b1679" (UID: "13cb62fd-7b8e-478c-a868-d9f3858b1679"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.389860 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "13cb62fd-7b8e-478c-a868-d9f3858b1679" (UID: "13cb62fd-7b8e-478c-a868-d9f3858b1679"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.390967 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "13cb62fd-7b8e-478c-a868-d9f3858b1679" (UID: "13cb62fd-7b8e-478c-a868-d9f3858b1679"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.411536 4644 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.411627 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxn9h\" (UniqueName: \"kubernetes.io/projected/13cb62fd-7b8e-478c-a868-d9f3858b1679-kube-api-access-lxn9h\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.411680 4644 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-config\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.411742 4644 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.411796 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.411844 4644 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:48 crc kubenswrapper[4644]: I0204 09:04:48.411892 4644 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13cb62fd-7b8e-478c-a868-d9f3858b1679-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 09:04:49 crc kubenswrapper[4644]: I0204 09:04:49.129706 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-zgs49" Feb 04 09:04:49 crc kubenswrapper[4644]: I0204 09:04:49.155791 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-zgs49"] Feb 04 09:04:49 crc kubenswrapper[4644]: I0204 09:04:49.164812 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-zgs49"] Feb 04 09:04:50 crc kubenswrapper[4644]: I0204 09:04:50.677956 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13cb62fd-7b8e-478c-a868-d9f3858b1679" path="/var/lib/kubelet/pods/13cb62fd-7b8e-478c-a868-d9f3858b1679/volumes" Feb 04 09:04:59 crc kubenswrapper[4644]: I0204 09:04:59.272194 4644 scope.go:117] "RemoveContainer" containerID="84b1b24d4e3257362d9453790dc80b4e57314ce6e6114db8d3f0048790cc2165" Feb 04 09:05:00 crc kubenswrapper[4644]: I0204 09:05:00.255765 4644 generic.go:334] "Generic (PLEG): container finished" podID="ca7a0ec9-ff74-4989-b66e-29bfc47bc73d" containerID="01c7e1acbed82dcecbbbf3e469fba3f371abf3014bd492e5094f044ecb181438" exitCode=0 Feb 04 09:05:00 crc kubenswrapper[4644]: I0204 09:05:00.255858 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d","Type":"ContainerDied","Data":"01c7e1acbed82dcecbbbf3e469fba3f371abf3014bd492e5094f044ecb181438"} Feb 04 09:05:01 crc kubenswrapper[4644]: I0204 09:05:01.266394 4644 generic.go:334] "Generic (PLEG): container finished" podID="ccc5a46e-238d-43d7-9d48-311b21c76326" containerID="4cfb01a177f94dd6668e54c28b2f408c5762ac2aa45b42c2164f5e04143eaf18" exitCode=0 Feb 04 09:05:01 crc kubenswrapper[4644]: I0204 09:05:01.266430 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ccc5a46e-238d-43d7-9d48-311b21c76326","Type":"ContainerDied","Data":"4cfb01a177f94dd6668e54c28b2f408c5762ac2aa45b42c2164f5e04143eaf18"} Feb 04 09:05:01 crc kubenswrapper[4644]: I0204 09:05:01.268490 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ca7a0ec9-ff74-4989-b66e-29bfc47bc73d","Type":"ContainerStarted","Data":"e2dbae67ad5dc134416b300961cd887aa5d24978f489f9b6de4012487ad60138"} Feb 04 09:05:01 crc kubenswrapper[4644]: I0204 09:05:01.268666 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 04 09:05:01 crc kubenswrapper[4644]: I0204 09:05:01.398008 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.397989053 podStartE2EDuration="36.397989053s" podCreationTimestamp="2026-02-04 09:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:05:01.375996416 +0000 UTC m=+1411.416054191" watchObservedRunningTime="2026-02-04 09:05:01.397989053 +0000 UTC m=+1411.438046798" Feb 04 09:05:02 crc kubenswrapper[4644]: I0204 09:05:02.281074 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ccc5a46e-238d-43d7-9d48-311b21c76326","Type":"ContainerStarted","Data":"3bb20fdd941a0e28439384b6b6360def672fe30ae3eda623ced71d30e817ea18"} Feb 04 09:05:02 crc kubenswrapper[4644]: I0204 09:05:02.281907 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:05:02 crc kubenswrapper[4644]: I0204 09:05:02.311586 4644 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.311561852 podStartE2EDuration="37.311561852s" podCreationTimestamp="2026-02-04 09:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:05:02.30301228 +0000 UTC m=+1412.343070055" watchObservedRunningTime="2026-02-04 09:05:02.311561852 +0000 UTC m=+1412.351619607" Feb 04 09:05:05 crc kubenswrapper[4644]: I0204 09:05:05.555587 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:05:05 crc kubenswrapper[4644]: I0204 09:05:05.556229 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:05:05 crc kubenswrapper[4644]: I0204 09:05:05.556294 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 09:05:05 crc kubenswrapper[4644]: I0204 09:05:05.557268 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e915b7a995ee5263275a39a64bfa25a45000de9a4285b8f5bfe66a5bbbce8ddf"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 09:05:05 crc kubenswrapper[4644]: I0204 09:05:05.557397 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://e915b7a995ee5263275a39a64bfa25a45000de9a4285b8f5bfe66a5bbbce8ddf" gracePeriod=600 Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.255471 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw"] Feb 04 09:05:06 crc kubenswrapper[4644]: E0204 09:05:06.256272 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d82fcd-bb75-4881-b500-a77feab77cfc" containerName="dnsmasq-dns" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.256293 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d82fcd-bb75-4881-b500-a77feab77cfc" containerName="dnsmasq-dns" Feb 04 09:05:06 crc kubenswrapper[4644]: E0204 09:05:06.256322 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d82fcd-bb75-4881-b500-a77feab77cfc" containerName="init" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.256347 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d82fcd-bb75-4881-b500-a77feab77cfc" containerName="init" Feb 04 09:05:06 crc kubenswrapper[4644]: E0204 09:05:06.256371 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13cb62fd-7b8e-478c-a868-d9f3858b1679" containerName="dnsmasq-dns" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.256379 4644 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="13cb62fd-7b8e-478c-a868-d9f3858b1679" containerName="dnsmasq-dns" Feb 04 09:05:06 crc kubenswrapper[4644]: E0204 09:05:06.256399 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13cb62fd-7b8e-478c-a868-d9f3858b1679" containerName="init" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.256406 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="13cb62fd-7b8e-478c-a868-d9f3858b1679" containerName="init" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.256660 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="63d82fcd-bb75-4881-b500-a77feab77cfc" containerName="dnsmasq-dns" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.256691 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="13cb62fd-7b8e-478c-a868-d9f3858b1679" containerName="dnsmasq-dns" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.257466 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.260019 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.260053 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.260462 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.262109 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.312056 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw"] Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.323427 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="e915b7a995ee5263275a39a64bfa25a45000de9a4285b8f5bfe66a5bbbce8ddf" exitCode=0 Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.323481 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"e915b7a995ee5263275a39a64bfa25a45000de9a4285b8f5bfe66a5bbbce8ddf"} Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.323508 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982"} Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.323525 4644 scope.go:117] "RemoveContainer" containerID="36f72411266c61400b63aa036f1c2b9650e9b73d1bad4f669e237a3c8534406d" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.409549 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw\" (UID: \"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:06 crc kubenswrapper[4644]: 
I0204 09:05:06.409618 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw\" (UID: \"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.410136 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl7wg\" (UniqueName: \"kubernetes.io/projected/156d5fb6-7e66-4c46-b846-26d3344b8f05-kube-api-access-xl7wg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw\" (UID: \"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.410202 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw\" (UID: \"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.512556 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl7wg\" (UniqueName: \"kubernetes.io/projected/156d5fb6-7e66-4c46-b846-26d3344b8f05-kube-api-access-xl7wg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw\" (UID: \"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.512614 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw\" (UID: \"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.512777 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw\" (UID: \"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.512828 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw\" (UID: \"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.519100 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw\" (UID: 
\"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.531657 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw\" (UID: \"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.531805 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw\" (UID: \"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.538421 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl7wg\" (UniqueName: \"kubernetes.io/projected/156d5fb6-7e66-4c46-b846-26d3344b8f05-kube-api-access-xl7wg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw\" (UID: \"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:06 crc kubenswrapper[4644]: I0204 09:05:06.576216 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:07 crc kubenswrapper[4644]: I0204 09:05:07.253003 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw"] Feb 04 09:05:07 crc kubenswrapper[4644]: W0204 09:05:07.264313 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod156d5fb6_7e66_4c46_b846_26d3344b8f05.slice/crio-b80b4c90a6e2a213f3f2d1cd0a3d78aa32f08520dbf3f202e94172add4fd9f19 WatchSource:0}: Error finding container b80b4c90a6e2a213f3f2d1cd0a3d78aa32f08520dbf3f202e94172add4fd9f19: Status 404 returned error can't find the container with id b80b4c90a6e2a213f3f2d1cd0a3d78aa32f08520dbf3f202e94172add4fd9f19 Feb 04 09:05:07 crc kubenswrapper[4644]: I0204 09:05:07.334752 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" event={"ID":"156d5fb6-7e66-4c46-b846-26d3344b8f05","Type":"ContainerStarted","Data":"b80b4c90a6e2a213f3f2d1cd0a3d78aa32f08520dbf3f202e94172add4fd9f19"} Feb 04 09:05:15 crc kubenswrapper[4644]: I0204 09:05:15.565147 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 04 09:05:16 crc kubenswrapper[4644]: I0204 09:05:16.368503 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 04 09:05:20 crc kubenswrapper[4644]: I0204 09:05:20.503035 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" event={"ID":"156d5fb6-7e66-4c46-b846-26d3344b8f05","Type":"ContainerStarted","Data":"951b5745bcf440a5b94c4157de5096f1b77745cf1f5675573e4e66dd5491c21f"} Feb 04 09:05:20 crc kubenswrapper[4644]: I0204 09:05:20.531908 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" podStartSLOduration=2.18750536 podStartE2EDuration="14.531887414s" podCreationTimestamp="2026-02-04 09:05:06 +0000 UTC" firstStartedPulling="2026-02-04 09:05:07.267154696 +0000 UTC m=+1417.307212451" lastFinishedPulling="2026-02-04 09:05:19.61153675 +0000 UTC m=+1429.651594505" observedRunningTime="2026-02-04 09:05:20.52216656 +0000 UTC m=+1430.562224335" watchObservedRunningTime="2026-02-04 09:05:20.531887414 +0000 UTC m=+1430.571945169" Feb 04 09:05:34 crc kubenswrapper[4644]: I0204 09:05:34.638005 4644 generic.go:334] "Generic (PLEG): container finished" podID="156d5fb6-7e66-4c46-b846-26d3344b8f05" containerID="951b5745bcf440a5b94c4157de5096f1b77745cf1f5675573e4e66dd5491c21f" exitCode=0 Feb 04 09:05:34 crc kubenswrapper[4644]: I0204 09:05:34.638069 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" event={"ID":"156d5fb6-7e66-4c46-b846-26d3344b8f05","Type":"ContainerDied","Data":"951b5745bcf440a5b94c4157de5096f1b77745cf1f5675573e4e66dd5491c21f"} Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.119089 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.235743 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl7wg\" (UniqueName: \"kubernetes.io/projected/156d5fb6-7e66-4c46-b846-26d3344b8f05-kube-api-access-xl7wg\") pod \"156d5fb6-7e66-4c46-b846-26d3344b8f05\" (UID: \"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.235804 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-inventory\") pod \"156d5fb6-7e66-4c46-b846-26d3344b8f05\" (UID: \"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.235897 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-ssh-key-openstack-edpm-ipam\") pod \"156d5fb6-7e66-4c46-b846-26d3344b8f05\" (UID: \"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.235980 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-repo-setup-combined-ca-bundle\") pod \"156d5fb6-7e66-4c46-b846-26d3344b8f05\" (UID: \"156d5fb6-7e66-4c46-b846-26d3344b8f05\") " Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.244623 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/156d5fb6-7e66-4c46-b846-26d3344b8f05-kube-api-access-xl7wg" (OuterVolumeSpecName: "kube-api-access-xl7wg") pod "156d5fb6-7e66-4c46-b846-26d3344b8f05" (UID: "156d5fb6-7e66-4c46-b846-26d3344b8f05"). InnerVolumeSpecName "kube-api-access-xl7wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.246681 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "156d5fb6-7e66-4c46-b846-26d3344b8f05" (UID: "156d5fb6-7e66-4c46-b846-26d3344b8f05"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.269902 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "156d5fb6-7e66-4c46-b846-26d3344b8f05" (UID: "156d5fb6-7e66-4c46-b846-26d3344b8f05"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.276112 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-inventory" (OuterVolumeSpecName: "inventory") pod "156d5fb6-7e66-4c46-b846-26d3344b8f05" (UID: "156d5fb6-7e66-4c46-b846-26d3344b8f05"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.339190 4644 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.339247 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl7wg\" (UniqueName: \"kubernetes.io/projected/156d5fb6-7e66-4c46-b846-26d3344b8f05-kube-api-access-xl7wg\") on node \"crc\" DevicePath \"\"" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.339268 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.339288 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/156d5fb6-7e66-4c46-b846-26d3344b8f05-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.661031 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.673137 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw" event={"ID":"156d5fb6-7e66-4c46-b846-26d3344b8f05","Type":"ContainerDied","Data":"b80b4c90a6e2a213f3f2d1cd0a3d78aa32f08520dbf3f202e94172add4fd9f19"} Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.673189 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b80b4c90a6e2a213f3f2d1cd0a3d78aa32f08520dbf3f202e94172add4fd9f19" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.825076 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s"] Feb 04 09:05:36 crc kubenswrapper[4644]: E0204 09:05:36.825662 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156d5fb6-7e66-4c46-b846-26d3344b8f05" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.825698 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="156d5fb6-7e66-4c46-b846-26d3344b8f05" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.826016 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="156d5fb6-7e66-4c46-b846-26d3344b8f05" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.828165 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.833067 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.835261 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.835302 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.836354 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.883174 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s"] Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.956983 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2f175cb-68ae-4aa4-ad16-193a42aa579d-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rjb5s\" (UID: \"a2f175cb-68ae-4aa4-ad16-193a42aa579d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" Feb 04 09:05:36 crc kubenswrapper[4644]: I0204 09:05:36.957108 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2f175cb-68ae-4aa4-ad16-193a42aa579d-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rjb5s\" (UID: \"a2f175cb-68ae-4aa4-ad16-193a42aa579d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" Feb 04 09:05:36 crc 
kubenswrapper[4644]: I0204 09:05:36.957440 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxcbg\" (UniqueName: \"kubernetes.io/projected/a2f175cb-68ae-4aa4-ad16-193a42aa579d-kube-api-access-sxcbg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rjb5s\" (UID: \"a2f175cb-68ae-4aa4-ad16-193a42aa579d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" Feb 04 09:05:37 crc kubenswrapper[4644]: I0204 09:05:37.059282 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2f175cb-68ae-4aa4-ad16-193a42aa579d-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rjb5s\" (UID: \"a2f175cb-68ae-4aa4-ad16-193a42aa579d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" Feb 04 09:05:37 crc kubenswrapper[4644]: I0204 09:05:37.059446 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxcbg\" (UniqueName: \"kubernetes.io/projected/a2f175cb-68ae-4aa4-ad16-193a42aa579d-kube-api-access-sxcbg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rjb5s\" (UID: \"a2f175cb-68ae-4aa4-ad16-193a42aa579d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" Feb 04 09:05:37 crc kubenswrapper[4644]: I0204 09:05:37.059590 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2f175cb-68ae-4aa4-ad16-193a42aa579d-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rjb5s\" (UID: \"a2f175cb-68ae-4aa4-ad16-193a42aa579d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" Feb 04 09:05:37 crc kubenswrapper[4644]: I0204 09:05:37.064855 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2f175cb-68ae-4aa4-ad16-193a42aa579d-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rjb5s\" (UID: \"a2f175cb-68ae-4aa4-ad16-193a42aa579d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" Feb 04 09:05:37 crc kubenswrapper[4644]: I0204 09:05:37.071596 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2f175cb-68ae-4aa4-ad16-193a42aa579d-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rjb5s\" (UID: \"a2f175cb-68ae-4aa4-ad16-193a42aa579d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" Feb 04 09:05:37 crc kubenswrapper[4644]: I0204 09:05:37.078055 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxcbg\" (UniqueName: \"kubernetes.io/projected/a2f175cb-68ae-4aa4-ad16-193a42aa579d-kube-api-access-sxcbg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rjb5s\" (UID: \"a2f175cb-68ae-4aa4-ad16-193a42aa579d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" Feb 04 09:05:37 crc kubenswrapper[4644]: I0204 09:05:37.175045 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" Feb 04 09:05:37 crc kubenswrapper[4644]: I0204 09:05:37.717519 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s"] Feb 04 09:05:38 crc kubenswrapper[4644]: I0204 09:05:38.691124 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" event={"ID":"a2f175cb-68ae-4aa4-ad16-193a42aa579d","Type":"ContainerStarted","Data":"85d2ee884f56f6e38d3ee25db9fb7ee7769d13c110b697d98a4fe5dac756ae5b"} Feb 04 09:05:38 crc kubenswrapper[4644]: I0204 09:05:38.692770 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" event={"ID":"a2f175cb-68ae-4aa4-ad16-193a42aa579d","Type":"ContainerStarted","Data":"c5cf9e74e92863c6b720365ecbeb69276bd541b4ac34316d79b17c457e80fcd7"} Feb 04 09:05:38 crc kubenswrapper[4644]: I0204 09:05:38.713382 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" podStartSLOduration=2.322897332 podStartE2EDuration="2.713359382s" podCreationTimestamp="2026-02-04 09:05:36 +0000 UTC" firstStartedPulling="2026-02-04 09:05:37.721201871 +0000 UTC m=+1447.761259646" lastFinishedPulling="2026-02-04 09:05:38.111663921 +0000 UTC m=+1448.151721696" observedRunningTime="2026-02-04 09:05:38.704294076 +0000 UTC m=+1448.744351841" watchObservedRunningTime="2026-02-04 09:05:38.713359382 +0000 UTC m=+1448.753417137" Feb 04 09:05:41 crc kubenswrapper[4644]: I0204 09:05:41.719562 4644 generic.go:334] "Generic (PLEG): container finished" podID="a2f175cb-68ae-4aa4-ad16-193a42aa579d" containerID="85d2ee884f56f6e38d3ee25db9fb7ee7769d13c110b697d98a4fe5dac756ae5b" exitCode=0 Feb 04 09:05:41 crc kubenswrapper[4644]: I0204 09:05:41.719670 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" event={"ID":"a2f175cb-68ae-4aa4-ad16-193a42aa579d","Type":"ContainerDied","Data":"85d2ee884f56f6e38d3ee25db9fb7ee7769d13c110b697d98a4fe5dac756ae5b"} Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.146388 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.287405 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2f175cb-68ae-4aa4-ad16-193a42aa579d-ssh-key-openstack-edpm-ipam\") pod \"a2f175cb-68ae-4aa4-ad16-193a42aa579d\" (UID: \"a2f175cb-68ae-4aa4-ad16-193a42aa579d\") " Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.287796 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxcbg\" (UniqueName: \"kubernetes.io/projected/a2f175cb-68ae-4aa4-ad16-193a42aa579d-kube-api-access-sxcbg\") pod \"a2f175cb-68ae-4aa4-ad16-193a42aa579d\" (UID: \"a2f175cb-68ae-4aa4-ad16-193a42aa579d\") " Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.288051 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2f175cb-68ae-4aa4-ad16-193a42aa579d-inventory\") pod \"a2f175cb-68ae-4aa4-ad16-193a42aa579d\" (UID: \"a2f175cb-68ae-4aa4-ad16-193a42aa579d\") " Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.296113 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f175cb-68ae-4aa4-ad16-193a42aa579d-kube-api-access-sxcbg" (OuterVolumeSpecName: "kube-api-access-sxcbg") pod "a2f175cb-68ae-4aa4-ad16-193a42aa579d" (UID: "a2f175cb-68ae-4aa4-ad16-193a42aa579d"). InnerVolumeSpecName "kube-api-access-sxcbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.326409 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f175cb-68ae-4aa4-ad16-193a42aa579d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a2f175cb-68ae-4aa4-ad16-193a42aa579d" (UID: "a2f175cb-68ae-4aa4-ad16-193a42aa579d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.335296 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f175cb-68ae-4aa4-ad16-193a42aa579d-inventory" (OuterVolumeSpecName: "inventory") pod "a2f175cb-68ae-4aa4-ad16-193a42aa579d" (UID: "a2f175cb-68ae-4aa4-ad16-193a42aa579d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.390819 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxcbg\" (UniqueName: \"kubernetes.io/projected/a2f175cb-68ae-4aa4-ad16-193a42aa579d-kube-api-access-sxcbg\") on node \"crc\" DevicePath \"\"" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.391040 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2f175cb-68ae-4aa4-ad16-193a42aa579d-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.391051 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2f175cb-68ae-4aa4-ad16-193a42aa579d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.738749 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" event={"ID":"a2f175cb-68ae-4aa4-ad16-193a42aa579d","Type":"ContainerDied","Data":"c5cf9e74e92863c6b720365ecbeb69276bd541b4ac34316d79b17c457e80fcd7"} Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.738790 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5cf9e74e92863c6b720365ecbeb69276bd541b4ac34316d79b17c457e80fcd7" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.738923 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rjb5s" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.826816 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4"] Feb 04 09:05:43 crc kubenswrapper[4644]: E0204 09:05:43.827207 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f175cb-68ae-4aa4-ad16-193a42aa579d" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.827222 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f175cb-68ae-4aa4-ad16-193a42aa579d" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.827453 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f175cb-68ae-4aa4-ad16-193a42aa579d" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.829833 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.833393 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4"] Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.836140 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.836449 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.836901 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.837006 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.900695 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.900758 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slm5z\" (UniqueName: \"kubernetes.io/projected/308d165a-5458-4e82-936c-b7a25ebfcbe6-kube-api-access-slm5z\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.900827 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:05:43 crc kubenswrapper[4644]: I0204 09:05:43.901123 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:05:44 crc kubenswrapper[4644]: I0204 09:05:44.003536 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:05:44 crc kubenswrapper[4644]: I0204 09:05:44.003658 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:05:44 crc kubenswrapper[4644]: I0204 09:05:44.003694 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slm5z\" (UniqueName: \"kubernetes.io/projected/308d165a-5458-4e82-936c-b7a25ebfcbe6-kube-api-access-slm5z\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:05:44 crc kubenswrapper[4644]: I0204 09:05:44.003763 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:05:44 crc kubenswrapper[4644]: I0204 09:05:44.011078 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:05:44 crc kubenswrapper[4644]: I0204 09:05:44.011811 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:05:44 crc kubenswrapper[4644]: I0204 09:05:44.019555 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:05:44 crc kubenswrapper[4644]: I0204 09:05:44.025066 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slm5z\" (UniqueName: \"kubernetes.io/projected/308d165a-5458-4e82-936c-b7a25ebfcbe6-kube-api-access-slm5z\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:05:44 crc kubenswrapper[4644]: I0204 09:05:44.198185 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:05:44 crc kubenswrapper[4644]: W0204 09:05:44.739948 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod308d165a_5458_4e82_936c_b7a25ebfcbe6.slice/crio-ec2cb022b1945eb971d2871d46bf809c544e29b516b69a456e7fb4594c582a28 WatchSource:0}: Error finding container ec2cb022b1945eb971d2871d46bf809c544e29b516b69a456e7fb4594c582a28: Status 404 returned error can't find the container with id ec2cb022b1945eb971d2871d46bf809c544e29b516b69a456e7fb4594c582a28 Feb 04 09:05:44 crc kubenswrapper[4644]: I0204 09:05:44.743515 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4"] Feb 04 09:05:45 crc kubenswrapper[4644]: I0204 09:05:45.761116 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" event={"ID":"308d165a-5458-4e82-936c-b7a25ebfcbe6","Type":"ContainerStarted","Data":"162c84a89b192ea944ca55feaa3d4b2d8895ae8d5a36e6e1048cb58bb97573b7"} Feb 04 09:05:45 crc kubenswrapper[4644]: I0204 09:05:45.761499 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" event={"ID":"308d165a-5458-4e82-936c-b7a25ebfcbe6","Type":"ContainerStarted","Data":"ec2cb022b1945eb971d2871d46bf809c544e29b516b69a456e7fb4594c582a28"} Feb 04 09:05:45 crc kubenswrapper[4644]: I0204 09:05:45.790573 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" podStartSLOduration=2.372628405 podStartE2EDuration="2.79055343s" podCreationTimestamp="2026-02-04 09:05:43 +0000 UTC" firstStartedPulling="2026-02-04 09:05:44.742097413 +0000 UTC m=+1454.782155168" lastFinishedPulling="2026-02-04 09:05:45.160022428 +0000 UTC m=+1455.200080193" observedRunningTime="2026-02-04 09:05:45.782117181 +0000 UTC m=+1455.822174976" watchObservedRunningTime="2026-02-04 09:05:45.79055343 +0000 UTC m=+1455.830611185" Feb 04 09:05:59 crc kubenswrapper[4644]: I0204 09:05:59.434279 4644 scope.go:117] "RemoveContainer" containerID="5884dd8335eb971ba80b017c80a1aac95b7916a46d83af9c1d0da691a592df56" Feb 04 09:05:59 crc kubenswrapper[4644]: I0204 09:05:59.462279 4644 scope.go:117] "RemoveContainer" containerID="1b6c8806b58f224f4e456fbcab25c36e534f86ff01340ba639953f271bdcb6af" Feb 04 09:06:59 crc kubenswrapper[4644]: I0204 09:06:59.545237 4644 scope.go:117] "RemoveContainer" containerID="d8c455e6f38e0c6cccedfd565d04298b816fbc51955721c6d55de9cfa513b021" Feb 04 09:06:59 crc kubenswrapper[4644]: I0204 09:06:59.574123 4644 scope.go:117] "RemoveContainer" containerID="fa849767dc0fad0ce5a803017dabf754eac706e29f9879429800565872437189" Feb 04 09:06:59 crc kubenswrapper[4644]: I0204 09:06:59.606834 4644 scope.go:117] "RemoveContainer" containerID="1dab15a1be86303be92c3759c78a00e496aa99453479fc6bf25b4717598d3ad4" Feb 04 09:06:59 crc kubenswrapper[4644]: I0204 09:06:59.635100 4644 scope.go:117] "RemoveContainer" containerID="b8d2f9d0d5eba856866eb0223eb8fe1bc3726f9794663cd8255d3d6165529fc6" Feb 04 09:07:05 crc kubenswrapper[4644]: I0204 09:07:05.555356 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 04 09:07:05 crc kubenswrapper[4644]: I0204 09:07:05.556426 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:07:35 crc kubenswrapper[4644]: I0204 09:07:35.555027 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:07:35 crc kubenswrapper[4644]: I0204 09:07:35.555739 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:07:59 crc kubenswrapper[4644]: I0204 09:07:59.687616 4644 scope.go:117] "RemoveContainer" containerID="81f16fdc203970319ecb23e7416371815a2c6e54217d6c427b1e56240753ad09" Feb 04 09:08:05 crc kubenswrapper[4644]: I0204 09:08:05.555385 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:08:05 crc kubenswrapper[4644]: I0204 09:08:05.556080 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:08:05 crc kubenswrapper[4644]: I0204 09:08:05.556140 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 09:08:05 crc kubenswrapper[4644]: I0204 09:08:05.557151 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 09:08:05 crc kubenswrapper[4644]: I0204 09:08:05.557255 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" gracePeriod=600 Feb 04 09:08:05 crc kubenswrapper[4644]: E0204 09:08:05.710594 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:08:06 crc kubenswrapper[4644]: I0204 09:08:06.063672 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gspgp"] Feb 04 09:08:06 crc kubenswrapper[4644]: I0204 09:08:06.073894 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gspgp"] Feb 04 09:08:06 crc kubenswrapper[4644]: I0204 09:08:06.150818 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" exitCode=0 Feb 04 09:08:06 crc kubenswrapper[4644]: I0204 09:08:06.150865 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982"} Feb 04 09:08:06 crc kubenswrapper[4644]: I0204 09:08:06.150902 4644 scope.go:117] "RemoveContainer" containerID="e915b7a995ee5263275a39a64bfa25a45000de9a4285b8f5bfe66a5bbbce8ddf" Feb 04 09:08:06 crc kubenswrapper[4644]: I0204 09:08:06.151524 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:08:06 crc kubenswrapper[4644]: E0204 09:08:06.151786 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:08:06 crc kubenswrapper[4644]: I0204 09:08:06.670821 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="336812a1-9d89-4471-a974-e04f21404612" path="/var/lib/kubelet/pods/336812a1-9d89-4471-a974-e04f21404612/volumes" Feb 04 09:08:09 crc kubenswrapper[4644]: I0204 09:08:09.050890 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7336-account-create-update-xwl7d"] Feb 04 09:08:09 crc kubenswrapper[4644]: I0204 09:08:09.066824 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7336-account-create-update-xwl7d"] Feb 04 09:08:10 crc kubenswrapper[4644]: I0204 09:08:10.677516 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbea064f-1305-4798-b660-7d3aa50cb6a2" path="/var/lib/kubelet/pods/dbea064f-1305-4798-b660-7d3aa50cb6a2/volumes" Feb 04 09:08:13 crc kubenswrapper[4644]: I0204 09:08:13.033699 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-2ncgr"] Feb 04 09:08:13 crc kubenswrapper[4644]: I0204 09:08:13.042991 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-2ncgr"] Feb 04 09:08:14 crc kubenswrapper[4644]: I0204 09:08:14.669951 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e179b679-97af-4318-bdf1-07aedb5117a6" path="/var/lib/kubelet/pods/e179b679-97af-4318-bdf1-07aedb5117a6/volumes" Feb 04 09:08:15 crc kubenswrapper[4644]: I0204 09:08:15.030942 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-8sw9b"] Feb 04 09:08:15 crc kubenswrapper[4644]: I0204 09:08:15.040300 4644 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/placement-db-create-8sw9b"] Feb 04 09:08:16 crc kubenswrapper[4644]: I0204 09:08:16.681844 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="601e8461-e38b-4282-bdf4-2a8465e6623d" path="/var/lib/kubelet/pods/601e8461-e38b-4282-bdf4-2a8465e6623d/volumes" Feb 04 09:08:17 crc kubenswrapper[4644]: I0204 09:08:17.044590 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0b91-account-create-update-f46bp"] Feb 04 09:08:17 crc kubenswrapper[4644]: I0204 09:08:17.058875 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4b99-account-create-update-8vkh4"] Feb 04 09:08:17 crc kubenswrapper[4644]: I0204 09:08:17.068485 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0b91-account-create-update-f46bp"] Feb 04 09:08:17 crc kubenswrapper[4644]: I0204 09:08:17.078184 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4b99-account-create-update-8vkh4"] Feb 04 09:08:17 crc kubenswrapper[4644]: I0204 09:08:17.659075 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:08:17 crc kubenswrapper[4644]: E0204 09:08:17.659667 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:08:18 crc kubenswrapper[4644]: I0204 09:08:18.031561 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-867tt"] Feb 04 09:08:18 crc kubenswrapper[4644]: I0204 09:08:18.044324 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-k9sw7"] Feb 04 09:08:18 crc kubenswrapper[4644]: I0204 09:08:18.053045 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-867tt"] Feb 04 09:08:18 crc kubenswrapper[4644]: I0204 09:08:18.061974 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-k9sw7"] Feb 04 09:08:18 crc kubenswrapper[4644]: I0204 09:08:18.672224 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26972986-ae99-445c-8cfa-ef894a5427c3" path="/var/lib/kubelet/pods/26972986-ae99-445c-8cfa-ef894a5427c3/volumes" Feb 04 09:08:18 crc kubenswrapper[4644]: I0204 09:08:18.673725 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9183f642-1a40-4b25-93d6-0835b34764c1" path="/var/lib/kubelet/pods/9183f642-1a40-4b25-93d6-0835b34764c1/volumes" Feb 04 09:08:18 crc kubenswrapper[4644]: I0204 09:08:18.675747 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93002c20-2499-4505-821e-a86b63dc5d97" path="/var/lib/kubelet/pods/93002c20-2499-4505-821e-a86b63dc5d97/volumes" Feb 04 09:08:18 crc kubenswrapper[4644]: I0204 09:08:18.681875 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a17cf99e-6023-4563-9513-f5418f4a252b" path="/var/lib/kubelet/pods/a17cf99e-6023-4563-9513-f5418f4a252b/volumes" Feb 04 09:08:19 crc kubenswrapper[4644]: I0204 09:08:19.038095 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9610-account-create-update-n5g8s"] Feb 04 09:08:19 crc kubenswrapper[4644]: 
I0204 09:08:19.050458 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-b6d4b"] Feb 04 09:08:19 crc kubenswrapper[4644]: I0204 09:08:19.065967 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9610-account-create-update-n5g8s"] Feb 04 09:08:19 crc kubenswrapper[4644]: I0204 09:08:19.078247 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-daa1-account-create-update-5vttx"] Feb 04 09:08:19 crc kubenswrapper[4644]: I0204 09:08:19.088076 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-38bd-account-create-update-jkslr"] Feb 04 09:08:19 crc kubenswrapper[4644]: I0204 09:08:19.095816 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-b6d4b"] Feb 04 09:08:19 crc kubenswrapper[4644]: I0204 09:08:19.103719 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-daa1-account-create-update-5vttx"] Feb 04 09:08:19 crc kubenswrapper[4644]: I0204 09:08:19.111339 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-38bd-account-create-update-jkslr"] Feb 04 09:08:20 crc kubenswrapper[4644]: I0204 09:08:20.673588 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18e102f9-a169-43ec-bc1d-de48e8b59376" path="/var/lib/kubelet/pods/18e102f9-a169-43ec-bc1d-de48e8b59376/volumes" Feb 04 09:08:20 crc kubenswrapper[4644]: I0204 09:08:20.676511 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="675884a4-966e-4ca9-b279-e37202cab1d7" path="/var/lib/kubelet/pods/675884a4-966e-4ca9-b279-e37202cab1d7/volumes" Feb 04 09:08:20 crc kubenswrapper[4644]: I0204 09:08:20.679423 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86fcf6d5-331b-4a6e-b65c-3c68d28feb65" path="/var/lib/kubelet/pods/86fcf6d5-331b-4a6e-b65c-3c68d28feb65/volumes" Feb 04 09:08:20 crc kubenswrapper[4644]: I0204 09:08:20.682182 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95fbd688-db27-4267-aa54-c9c90a1b19ab" path="/var/lib/kubelet/pods/95fbd688-db27-4267-aa54-c9c90a1b19ab/volumes" Feb 04 09:08:21 crc kubenswrapper[4644]: I0204 09:08:21.063242 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kvhql"] Feb 04 09:08:21 crc kubenswrapper[4644]: I0204 09:08:21.065091 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:08:21 crc kubenswrapper[4644]: I0204 09:08:21.086613 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kvhql"] Feb 04 09:08:21 crc kubenswrapper[4644]: I0204 09:08:21.174522 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da13e90-7472-4d4f-bb65-6e830608016b-catalog-content\") pod \"community-operators-kvhql\" (UID: \"4da13e90-7472-4d4f-bb65-6e830608016b\") " pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:08:21 crc kubenswrapper[4644]: I0204 09:08:21.174887 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da13e90-7472-4d4f-bb65-6e830608016b-utilities\") pod \"community-operators-kvhql\" (UID: \"4da13e90-7472-4d4f-bb65-6e830608016b\") " pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:08:21 crc kubenswrapper[4644]: I0204 09:08:21.174971 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxr99\" (UniqueName: \"kubernetes.io/projected/4da13e90-7472-4d4f-bb65-6e830608016b-kube-api-access-nxr99\") pod \"community-operators-kvhql\" (UID: \"4da13e90-7472-4d4f-bb65-6e830608016b\") " pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:08:21 crc kubenswrapper[4644]: I0204 09:08:21.277492 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da13e90-7472-4d4f-bb65-6e830608016b-catalog-content\") pod \"community-operators-kvhql\" (UID: \"4da13e90-7472-4d4f-bb65-6e830608016b\") " pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:08:21 crc kubenswrapper[4644]: I0204 09:08:21.277737 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da13e90-7472-4d4f-bb65-6e830608016b-utilities\") pod \"community-operators-kvhql\" (UID: \"4da13e90-7472-4d4f-bb65-6e830608016b\") " pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:08:21 crc kubenswrapper[4644]: I0204 09:08:21.277831 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxr99\" (UniqueName: \"kubernetes.io/projected/4da13e90-7472-4d4f-bb65-6e830608016b-kube-api-access-nxr99\") pod \"community-operators-kvhql\" (UID: \"4da13e90-7472-4d4f-bb65-6e830608016b\") " pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:08:21 crc kubenswrapper[4644]: I0204 09:08:21.278628 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da13e90-7472-4d4f-bb65-6e830608016b-catalog-content\") pod \"community-operators-kvhql\" (UID: \"4da13e90-7472-4d4f-bb65-6e830608016b\") " pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:08:21 crc kubenswrapper[4644]: I0204 09:08:21.278903 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da13e90-7472-4d4f-bb65-6e830608016b-utilities\") pod \"community-operators-kvhql\" (UID: \"4da13e90-7472-4d4f-bb65-6e830608016b\") " pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:08:21 crc kubenswrapper[4644]: I0204 09:08:21.350392 4644 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nxr99\" (UniqueName: \"kubernetes.io/projected/4da13e90-7472-4d4f-bb65-6e830608016b-kube-api-access-nxr99\") pod \"community-operators-kvhql\" (UID: \"4da13e90-7472-4d4f-bb65-6e830608016b\") " pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:08:21 crc kubenswrapper[4644]: I0204 09:08:21.398510 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:08:21 crc kubenswrapper[4644]: I0204 09:08:21.871455 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kvhql"] Feb 04 09:08:22 crc kubenswrapper[4644]: I0204 09:08:22.046927 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wt9t7"] Feb 04 09:08:22 crc kubenswrapper[4644]: I0204 09:08:22.071759 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wt9t7"] Feb 04 09:08:22 crc kubenswrapper[4644]: I0204 09:08:22.336941 4644 generic.go:334] "Generic (PLEG): container finished" podID="4da13e90-7472-4d4f-bb65-6e830608016b" containerID="da9c9af426903e40dde413f2697e4d4d0f3c4e5eb4a29f433ce2345600d52b8b" exitCode=0 Feb 04 09:08:22 crc kubenswrapper[4644]: I0204 09:08:22.338460 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvhql" event={"ID":"4da13e90-7472-4d4f-bb65-6e830608016b","Type":"ContainerDied","Data":"da9c9af426903e40dde413f2697e4d4d0f3c4e5eb4a29f433ce2345600d52b8b"} Feb 04 09:08:22 crc kubenswrapper[4644]: I0204 09:08:22.338521 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvhql" event={"ID":"4da13e90-7472-4d4f-bb65-6e830608016b","Type":"ContainerStarted","Data":"6a1193135afe18d95732648ac7a6f54364571250def14574b12b9fe8f8b02c55"} Feb 04 09:08:22 crc kubenswrapper[4644]: I0204 09:08:22.340128 4644 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 09:08:22 crc kubenswrapper[4644]: I0204 09:08:22.675121 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c2f0296-f7ca-4c5d-bbf0-d77692ee814b" path="/var/lib/kubelet/pods/9c2f0296-f7ca-4c5d-bbf0-d77692ee814b/volumes" Feb 04 09:08:23 crc kubenswrapper[4644]: I0204 09:08:23.352932 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvhql" event={"ID":"4da13e90-7472-4d4f-bb65-6e830608016b","Type":"ContainerStarted","Data":"ba957ede0c757e228176bc0d6e9dfc6a8379f6d47d4268eb9802aa3c3f5982dc"} Feb 04 09:08:27 crc kubenswrapper[4644]: I0204 09:08:27.386129 4644 generic.go:334] "Generic (PLEG): container finished" podID="4da13e90-7472-4d4f-bb65-6e830608016b" containerID="ba957ede0c757e228176bc0d6e9dfc6a8379f6d47d4268eb9802aa3c3f5982dc" exitCode=0 Feb 04 09:08:27 crc kubenswrapper[4644]: I0204 09:08:27.386179 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvhql" event={"ID":"4da13e90-7472-4d4f-bb65-6e830608016b","Type":"ContainerDied","Data":"ba957ede0c757e228176bc0d6e9dfc6a8379f6d47d4268eb9802aa3c3f5982dc"} Feb 04 09:08:29 crc kubenswrapper[4644]: I0204 09:08:29.410274 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvhql" event={"ID":"4da13e90-7472-4d4f-bb65-6e830608016b","Type":"ContainerStarted","Data":"c15b7a49208f4d859bfb095a325979b6a1fbafa0b9df7ae934712d60572d3c10"} Feb 04 
09:08:29 crc kubenswrapper[4644]: I0204 09:08:29.454995 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kvhql" podStartSLOduration=2.464736134 podStartE2EDuration="8.454973002s" podCreationTimestamp="2026-02-04 09:08:21 +0000 UTC" firstStartedPulling="2026-02-04 09:08:22.339908785 +0000 UTC m=+1612.379966540" lastFinishedPulling="2026-02-04 09:08:28.330145653 +0000 UTC m=+1618.370203408" observedRunningTime="2026-02-04 09:08:29.443835058 +0000 UTC m=+1619.483892823" watchObservedRunningTime="2026-02-04 09:08:29.454973002 +0000 UTC m=+1619.495030777" Feb 04 09:08:31 crc kubenswrapper[4644]: I0204 09:08:31.399933 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:08:31 crc kubenswrapper[4644]: I0204 09:08:31.400253 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:08:31 crc kubenswrapper[4644]: I0204 09:08:31.659773 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:08:31 crc kubenswrapper[4644]: E0204 09:08:31.660211 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:08:32 crc kubenswrapper[4644]: I0204 09:08:32.463759 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-kvhql" podUID="4da13e90-7472-4d4f-bb65-6e830608016b" containerName="registry-server" probeResult="failure" output=< Feb 04 09:08:32 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:08:32 crc kubenswrapper[4644]: > Feb 04 09:08:42 crc kubenswrapper[4644]: I0204 09:08:42.453906 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-kvhql" podUID="4da13e90-7472-4d4f-bb65-6e830608016b" containerName="registry-server" probeResult="failure" output=< Feb 04 09:08:42 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:08:42 crc kubenswrapper[4644]: > Feb 04 09:08:42 crc kubenswrapper[4644]: I0204 09:08:42.659959 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:08:42 crc kubenswrapper[4644]: E0204 09:08:42.660208 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:08:52 crc kubenswrapper[4644]: I0204 09:08:52.450351 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-kvhql" podUID="4da13e90-7472-4d4f-bb65-6e830608016b" containerName="registry-server" probeResult="failure" output=< Feb 04 09:08:52 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" 
within 1s Feb 04 09:08:52 crc kubenswrapper[4644]: > Feb 04 09:08:57 crc kubenswrapper[4644]: I0204 09:08:57.660709 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:08:57 crc kubenswrapper[4644]: E0204 09:08:57.661558 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:08:59 crc kubenswrapper[4644]: I0204 09:08:59.930959 4644 scope.go:117] "RemoveContainer" containerID="ff090da455a8744d4e5521ddad8de8fdafca5cf03e6f1011ae84346550cab925" Feb 04 09:08:59 crc kubenswrapper[4644]: I0204 09:08:59.967154 4644 scope.go:117] "RemoveContainer" containerID="9659a9c8278a66f0e2139848dd1fad92993f0010801b721a43986571fa12ce22" Feb 04 09:09:00 crc kubenswrapper[4644]: I0204 09:09:00.018017 4644 scope.go:117] "RemoveContainer" containerID="ab907603f08ccd9cea4238caecdddb1f64bf1ba7180a79313a9ae6d14d78d4a9" Feb 04 09:09:00 crc kubenswrapper[4644]: I0204 09:09:00.068283 4644 scope.go:117] "RemoveContainer" containerID="fa75280a8573d2a22be14b02d722c1fbc7505713694c4335be19c2cf46b46498" Feb 04 09:09:00 crc kubenswrapper[4644]: I0204 09:09:00.126302 4644 scope.go:117] "RemoveContainer" containerID="bdb3c12ec6143ceda9b880281d7c87f98796f2ff0a73ffdc4fa2674590e0fdc1" Feb 04 09:09:00 crc kubenswrapper[4644]: I0204 09:09:00.154799 4644 scope.go:117] "RemoveContainer" containerID="86ee48f2731172d403f473ebbb3ba38a69dc40396ca65823bfbf5730cbeb18f2" Feb 04 09:09:00 crc kubenswrapper[4644]: I0204 09:09:00.203038 4644 scope.go:117] "RemoveContainer" containerID="6f35bbf13c80b1888ba9266546f5e4a1181b8c282f7162a9edcf970e9384efa8" Feb 04 09:09:00 crc kubenswrapper[4644]: I0204 09:09:00.230711 4644 scope.go:117] "RemoveContainer" containerID="5d773e64a637cc3d11cda5d41d2669f4fd9bd396859ee5d9f123bf27bb062e40" Feb 04 09:09:00 crc kubenswrapper[4644]: I0204 09:09:00.259618 4644 scope.go:117] "RemoveContainer" containerID="073c3cb69189606e749e4e677cd0448d889c69f04a62e52efd6e47ccdb7c738d" Feb 04 09:09:00 crc kubenswrapper[4644]: I0204 09:09:00.279676 4644 scope.go:117] "RemoveContainer" containerID="032e73a144ee913df2be083403248060f67dcab80e539c86ce3751244569299e" Feb 04 09:09:00 crc kubenswrapper[4644]: I0204 09:09:00.307870 4644 scope.go:117] "RemoveContainer" containerID="8c5204b4112bbeac34302e484ab4714a6f731942b1871d6a182cae93c49f2035" Feb 04 09:09:00 crc kubenswrapper[4644]: I0204 09:09:00.328154 4644 scope.go:117] "RemoveContainer" containerID="11d34483e855f0e997f4ebbd3c9ecc5a8f37e501222216c42ffaec1f064a9986" Feb 04 09:09:00 crc kubenswrapper[4644]: I0204 09:09:00.350261 4644 scope.go:117] "RemoveContainer" containerID="14902e2f24d6274b974c91458f166caf00bc6dd0d1f7858fddafec27c6e4e6b4" Feb 04 09:09:01 crc kubenswrapper[4644]: I0204 09:09:01.453605 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:09:01 crc kubenswrapper[4644]: I0204 09:09:01.508126 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:09:01 crc kubenswrapper[4644]: I0204 09:09:01.696226 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-kvhql"] Feb 04 09:09:03 crc kubenswrapper[4644]: I0204 09:09:03.063869 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kvhql" podUID="4da13e90-7472-4d4f-bb65-6e830608016b" containerName="registry-server" containerID="cri-o://c15b7a49208f4d859bfb095a325979b6a1fbafa0b9df7ae934712d60572d3c10" gracePeriod=2 Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.056501 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.076838 4644 generic.go:334] "Generic (PLEG): container finished" podID="4da13e90-7472-4d4f-bb65-6e830608016b" containerID="c15b7a49208f4d859bfb095a325979b6a1fbafa0b9df7ae934712d60572d3c10" exitCode=0 Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.076924 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvhql" event={"ID":"4da13e90-7472-4d4f-bb65-6e830608016b","Type":"ContainerDied","Data":"c15b7a49208f4d859bfb095a325979b6a1fbafa0b9df7ae934712d60572d3c10"} Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.077881 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvhql" event={"ID":"4da13e90-7472-4d4f-bb65-6e830608016b","Type":"ContainerDied","Data":"6a1193135afe18d95732648ac7a6f54364571250def14574b12b9fe8f8b02c55"} Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.076935 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kvhql" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.077948 4644 scope.go:117] "RemoveContainer" containerID="c15b7a49208f4d859bfb095a325979b6a1fbafa0b9df7ae934712d60572d3c10" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.099798 4644 scope.go:117] "RemoveContainer" containerID="ba957ede0c757e228176bc0d6e9dfc6a8379f6d47d4268eb9802aa3c3f5982dc" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.127859 4644 scope.go:117] "RemoveContainer" containerID="da9c9af426903e40dde413f2697e4d4d0f3c4e5eb4a29f433ce2345600d52b8b" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.145148 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxr99\" (UniqueName: \"kubernetes.io/projected/4da13e90-7472-4d4f-bb65-6e830608016b-kube-api-access-nxr99\") pod \"4da13e90-7472-4d4f-bb65-6e830608016b\" (UID: \"4da13e90-7472-4d4f-bb65-6e830608016b\") " Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.145374 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da13e90-7472-4d4f-bb65-6e830608016b-utilities\") pod \"4da13e90-7472-4d4f-bb65-6e830608016b\" (UID: \"4da13e90-7472-4d4f-bb65-6e830608016b\") " Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.145460 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da13e90-7472-4d4f-bb65-6e830608016b-catalog-content\") pod \"4da13e90-7472-4d4f-bb65-6e830608016b\" (UID: \"4da13e90-7472-4d4f-bb65-6e830608016b\") " Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.148638 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4da13e90-7472-4d4f-bb65-6e830608016b-utilities" 
(OuterVolumeSpecName: "utilities") pod "4da13e90-7472-4d4f-bb65-6e830608016b" (UID: "4da13e90-7472-4d4f-bb65-6e830608016b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.159921 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da13e90-7472-4d4f-bb65-6e830608016b-kube-api-access-nxr99" (OuterVolumeSpecName: "kube-api-access-nxr99") pod "4da13e90-7472-4d4f-bb65-6e830608016b" (UID: "4da13e90-7472-4d4f-bb65-6e830608016b"). InnerVolumeSpecName "kube-api-access-nxr99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.182079 4644 scope.go:117] "RemoveContainer" containerID="c15b7a49208f4d859bfb095a325979b6a1fbafa0b9df7ae934712d60572d3c10" Feb 04 09:09:04 crc kubenswrapper[4644]: E0204 09:09:04.187621 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c15b7a49208f4d859bfb095a325979b6a1fbafa0b9df7ae934712d60572d3c10\": container with ID starting with c15b7a49208f4d859bfb095a325979b6a1fbafa0b9df7ae934712d60572d3c10 not found: ID does not exist" containerID="c15b7a49208f4d859bfb095a325979b6a1fbafa0b9df7ae934712d60572d3c10" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.187706 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c15b7a49208f4d859bfb095a325979b6a1fbafa0b9df7ae934712d60572d3c10"} err="failed to get container status \"c15b7a49208f4d859bfb095a325979b6a1fbafa0b9df7ae934712d60572d3c10\": rpc error: code = NotFound desc = could not find container \"c15b7a49208f4d859bfb095a325979b6a1fbafa0b9df7ae934712d60572d3c10\": container with ID starting with c15b7a49208f4d859bfb095a325979b6a1fbafa0b9df7ae934712d60572d3c10 not found: ID does not exist" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.187758 4644 scope.go:117] "RemoveContainer" containerID="ba957ede0c757e228176bc0d6e9dfc6a8379f6d47d4268eb9802aa3c3f5982dc" Feb 04 09:09:04 crc kubenswrapper[4644]: E0204 09:09:04.188246 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba957ede0c757e228176bc0d6e9dfc6a8379f6d47d4268eb9802aa3c3f5982dc\": container with ID starting with ba957ede0c757e228176bc0d6e9dfc6a8379f6d47d4268eb9802aa3c3f5982dc not found: ID does not exist" containerID="ba957ede0c757e228176bc0d6e9dfc6a8379f6d47d4268eb9802aa3c3f5982dc" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.188391 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba957ede0c757e228176bc0d6e9dfc6a8379f6d47d4268eb9802aa3c3f5982dc"} err="failed to get container status \"ba957ede0c757e228176bc0d6e9dfc6a8379f6d47d4268eb9802aa3c3f5982dc\": rpc error: code = NotFound desc = could not find container \"ba957ede0c757e228176bc0d6e9dfc6a8379f6d47d4268eb9802aa3c3f5982dc\": container with ID starting with ba957ede0c757e228176bc0d6e9dfc6a8379f6d47d4268eb9802aa3c3f5982dc not found: ID does not exist" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.188496 4644 scope.go:117] "RemoveContainer" containerID="da9c9af426903e40dde413f2697e4d4d0f3c4e5eb4a29f433ce2345600d52b8b" Feb 04 09:09:04 crc kubenswrapper[4644]: E0204 09:09:04.188919 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"da9c9af426903e40dde413f2697e4d4d0f3c4e5eb4a29f433ce2345600d52b8b\": container with ID starting with da9c9af426903e40dde413f2697e4d4d0f3c4e5eb4a29f433ce2345600d52b8b not found: ID does not exist" containerID="da9c9af426903e40dde413f2697e4d4d0f3c4e5eb4a29f433ce2345600d52b8b" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.188998 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da9c9af426903e40dde413f2697e4d4d0f3c4e5eb4a29f433ce2345600d52b8b"} err="failed to get container status \"da9c9af426903e40dde413f2697e4d4d0f3c4e5eb4a29f433ce2345600d52b8b\": rpc error: code = NotFound desc = could not find container \"da9c9af426903e40dde413f2697e4d4d0f3c4e5eb4a29f433ce2345600d52b8b\": container with ID starting with da9c9af426903e40dde413f2697e4d4d0f3c4e5eb4a29f433ce2345600d52b8b not found: ID does not exist" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.218680 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4da13e90-7472-4d4f-bb65-6e830608016b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4da13e90-7472-4d4f-bb65-6e830608016b" (UID: "4da13e90-7472-4d4f-bb65-6e830608016b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.247860 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da13e90-7472-4d4f-bb65-6e830608016b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.247891 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxr99\" (UniqueName: \"kubernetes.io/projected/4da13e90-7472-4d4f-bb65-6e830608016b-kube-api-access-nxr99\") on node \"crc\" DevicePath \"\"" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.247904 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da13e90-7472-4d4f-bb65-6e830608016b-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.437445 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kvhql"] Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.447272 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kvhql"] Feb 04 09:09:04 crc kubenswrapper[4644]: I0204 09:09:04.671794 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4da13e90-7472-4d4f-bb65-6e830608016b" path="/var/lib/kubelet/pods/4da13e90-7472-4d4f-bb65-6e830608016b/volumes" Feb 04 09:09:06 crc kubenswrapper[4644]: I0204 09:09:06.047986 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-f4mlq"] Feb 04 09:09:06 crc kubenswrapper[4644]: I0204 09:09:06.068070 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-f4mlq"] Feb 04 09:09:06 crc kubenswrapper[4644]: I0204 09:09:06.701351 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6e10c7c-ed36-46c7-8f45-1428c6a7fa31" path="/var/lib/kubelet/pods/b6e10c7c-ed36-46c7-8f45-1428c6a7fa31/volumes" Feb 04 09:09:11 crc kubenswrapper[4644]: I0204 09:09:11.659398 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:09:11 crc kubenswrapper[4644]: E0204 09:09:11.660064 4644 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:09:26 crc kubenswrapper[4644]: I0204 09:09:26.660453 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:09:26 crc kubenswrapper[4644]: E0204 09:09:26.662707 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:09:28 crc kubenswrapper[4644]: I0204 09:09:28.284479 4644 generic.go:334] "Generic (PLEG): container finished" podID="308d165a-5458-4e82-936c-b7a25ebfcbe6" containerID="162c84a89b192ea944ca55feaa3d4b2d8895ae8d5a36e6e1048cb58bb97573b7" exitCode=0 Feb 04 09:09:28 crc kubenswrapper[4644]: I0204 09:09:28.284798 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" event={"ID":"308d165a-5458-4e82-936c-b7a25ebfcbe6","Type":"ContainerDied","Data":"162c84a89b192ea944ca55feaa3d4b2d8895ae8d5a36e6e1048cb58bb97573b7"} Feb 04 09:09:29 crc kubenswrapper[4644]: I0204 09:09:29.761170 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:09:29 crc kubenswrapper[4644]: I0204 09:09:29.856917 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-ssh-key-openstack-edpm-ipam\") pod \"308d165a-5458-4e82-936c-b7a25ebfcbe6\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " Feb 04 09:09:29 crc kubenswrapper[4644]: I0204 09:09:29.857285 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-inventory\") pod \"308d165a-5458-4e82-936c-b7a25ebfcbe6\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " Feb 04 09:09:29 crc kubenswrapper[4644]: I0204 09:09:29.857371 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slm5z\" (UniqueName: \"kubernetes.io/projected/308d165a-5458-4e82-936c-b7a25ebfcbe6-kube-api-access-slm5z\") pod \"308d165a-5458-4e82-936c-b7a25ebfcbe6\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " Feb 04 09:09:29 crc kubenswrapper[4644]: I0204 09:09:29.857404 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-bootstrap-combined-ca-bundle\") pod \"308d165a-5458-4e82-936c-b7a25ebfcbe6\" (UID: \"308d165a-5458-4e82-936c-b7a25ebfcbe6\") " Feb 04 09:09:29 crc kubenswrapper[4644]: I0204 09:09:29.883789 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "308d165a-5458-4e82-936c-b7a25ebfcbe6" (UID: "308d165a-5458-4e82-936c-b7a25ebfcbe6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:09:29 crc kubenswrapper[4644]: I0204 09:09:29.884453 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308d165a-5458-4e82-936c-b7a25ebfcbe6-kube-api-access-slm5z" (OuterVolumeSpecName: "kube-api-access-slm5z") pod "308d165a-5458-4e82-936c-b7a25ebfcbe6" (UID: "308d165a-5458-4e82-936c-b7a25ebfcbe6"). InnerVolumeSpecName "kube-api-access-slm5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:09:29 crc kubenswrapper[4644]: I0204 09:09:29.888242 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-inventory" (OuterVolumeSpecName: "inventory") pod "308d165a-5458-4e82-936c-b7a25ebfcbe6" (UID: "308d165a-5458-4e82-936c-b7a25ebfcbe6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:09:29 crc kubenswrapper[4644]: I0204 09:09:29.909933 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "308d165a-5458-4e82-936c-b7a25ebfcbe6" (UID: "308d165a-5458-4e82-936c-b7a25ebfcbe6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:09:29 crc kubenswrapper[4644]: I0204 09:09:29.959171 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 09:09:29 crc kubenswrapper[4644]: I0204 09:09:29.959205 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slm5z\" (UniqueName: \"kubernetes.io/projected/308d165a-5458-4e82-936c-b7a25ebfcbe6-kube-api-access-slm5z\") on node \"crc\" DevicePath \"\"" Feb 04 09:09:29 crc kubenswrapper[4644]: I0204 09:09:29.959217 4644 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:09:29 crc kubenswrapper[4644]: I0204 09:09:29.959226 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/308d165a-5458-4e82-936c-b7a25ebfcbe6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.317481 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" event={"ID":"308d165a-5458-4e82-936c-b7a25ebfcbe6","Type":"ContainerDied","Data":"ec2cb022b1945eb971d2871d46bf809c544e29b516b69a456e7fb4594c582a28"} Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.317538 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec2cb022b1945eb971d2871d46bf809c544e29b516b69a456e7fb4594c582a28" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.317600 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.402243 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w"] Feb 04 09:09:30 crc kubenswrapper[4644]: E0204 09:09:30.402647 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da13e90-7472-4d4f-bb65-6e830608016b" containerName="extract-content" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.402664 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da13e90-7472-4d4f-bb65-6e830608016b" containerName="extract-content" Feb 04 09:09:30 crc kubenswrapper[4644]: E0204 09:09:30.402690 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da13e90-7472-4d4f-bb65-6e830608016b" containerName="extract-utilities" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.402696 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da13e90-7472-4d4f-bb65-6e830608016b" containerName="extract-utilities" Feb 04 09:09:30 crc kubenswrapper[4644]: E0204 09:09:30.402706 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="308d165a-5458-4e82-936c-b7a25ebfcbe6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.402713 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="308d165a-5458-4e82-936c-b7a25ebfcbe6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 04 09:09:30 crc kubenswrapper[4644]: E0204 09:09:30.402729 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da13e90-7472-4d4f-bb65-6e830608016b" containerName="registry-server" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.402735 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da13e90-7472-4d4f-bb65-6e830608016b" containerName="registry-server" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.402911 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da13e90-7472-4d4f-bb65-6e830608016b" containerName="registry-server" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.402935 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="308d165a-5458-4e82-936c-b7a25ebfcbe6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.403523 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.406713 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.407502 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.419243 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w"] Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.421440 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.421734 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.570186 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w\" (UID: \"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.570266 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x6zq\" (UniqueName: \"kubernetes.io/projected/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-kube-api-access-2x6zq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w\" (UID: \"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.570387 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w\" (UID: \"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.671617 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w\" (UID: \"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.672030 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x6zq\" (UniqueName: \"kubernetes.io/projected/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-kube-api-access-2x6zq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w\" (UID: \"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.672160 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w\" (UID: \"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.674108 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.674280 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.689095 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w\" (UID: \"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.691123 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w\" (UID: \"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.692687 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x6zq\" (UniqueName: \"kubernetes.io/projected/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-kube-api-access-2x6zq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w\" (UID: \"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.728084 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:09:30 crc kubenswrapper[4644]: I0204 09:09:30.736377 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" Feb 04 09:09:31 crc kubenswrapper[4644]: I0204 09:09:31.351944 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w"] Feb 04 09:09:32 crc kubenswrapper[4644]: I0204 09:09:32.337542 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" event={"ID":"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58","Type":"ContainerStarted","Data":"2b7a1b540fe181988185c5b3391489c53f9d94f3b98945af6e99e90c27a5595d"} Feb 04 09:09:32 crc kubenswrapper[4644]: I0204 09:09:32.652668 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:09:33 crc kubenswrapper[4644]: I0204 09:09:33.360747 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" event={"ID":"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58","Type":"ContainerStarted","Data":"3e80651715d9afd0bd7f5016e73501b225afa0930f467fc4ef141d9aabf0dd02"} Feb 04 09:09:33 crc kubenswrapper[4644]: I0204 09:09:33.394968 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" podStartSLOduration=2.098177655 podStartE2EDuration="3.394941726s" podCreationTimestamp="2026-02-04 09:09:30 +0000 UTC" firstStartedPulling="2026-02-04 09:09:31.35112145 +0000 UTC m=+1681.391179205" lastFinishedPulling="2026-02-04 09:09:32.647885521 +0000 UTC m=+1682.687943276" observedRunningTime="2026-02-04 09:09:33.384159422 +0000 UTC m=+1683.424217257" watchObservedRunningTime="2026-02-04 09:09:33.394941726 +0000 UTC m=+1683.434999521" Feb 04 09:09:39 crc kubenswrapper[4644]: I0204 09:09:39.039817 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6dczk"] Feb 04 09:09:39 crc kubenswrapper[4644]: I0204 09:09:39.049764 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6dczk"] Feb 04 09:09:39 crc kubenswrapper[4644]: I0204 09:09:39.659651 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:09:39 crc kubenswrapper[4644]: E0204 09:09:39.660005 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:09:40 crc kubenswrapper[4644]: I0204 09:09:40.675136 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1573f43-1a60-4b32-8286-02fb06f9d3a8" path="/var/lib/kubelet/pods/f1573f43-1a60-4b32-8286-02fb06f9d3a8/volumes" Feb 04 09:09:54 crc kubenswrapper[4644]: I0204 09:09:54.660213 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:09:54 crc kubenswrapper[4644]: E0204 09:09:54.660988 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Feb 04 09:10:00 crc kubenswrapper[4644]: I0204 09:10:00.641496 4644 scope.go:117] "RemoveContainer" containerID="8f930a01575e8ee275b492efdf0edac94a3b843708612bc102aa5552ad07850e"
Feb 04 09:10:00 crc kubenswrapper[4644]: I0204 09:10:00.685291 4644 scope.go:117] "RemoveContainer" containerID="09b1294a05fa778de3acf6ef8efe8aae0b263ea1a2b383f62ff9dbbfb5efff9e"
Feb 04 09:10:00 crc kubenswrapper[4644]: I0204 09:10:00.731272 4644 scope.go:117] "RemoveContainer" containerID="df6f829c758f962e9320cc979fbe36d67e9a53acebf0229f610c1282a63aeb7b"
Feb 04 09:10:00 crc kubenswrapper[4644]: I0204 09:10:00.755723 4644 scope.go:117] "RemoveContainer" containerID="fd7100471e7661330f3d0a9dbe264cf5cfea675d295fb99705c0588b8176eb5f"
Feb 04 09:10:07 crc kubenswrapper[4644]: I0204 09:10:07.659305 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982"
Feb 04 09:10:07 crc kubenswrapper[4644]: E0204 09:10:07.660575 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:10:10 crc kubenswrapper[4644]: I0204 09:10:10.044674 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-47ns9"]
Feb 04 09:10:10 crc kubenswrapper[4644]: I0204 09:10:10.053541 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-47ns9"]
Feb 04 09:10:10 crc kubenswrapper[4644]: I0204 09:10:10.673173 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d30388d1-fcaa-4680-ba0e-7cf6b071c356" path="/var/lib/kubelet/pods/d30388d1-fcaa-4680-ba0e-7cf6b071c356/volumes"
Feb 04 09:10:19 crc kubenswrapper[4644]: I0204 09:10:19.660383 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982"
Feb 04 09:10:19 crc kubenswrapper[4644]: E0204 09:10:19.661059 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:10:23 crc kubenswrapper[4644]: I0204 09:10:23.029304 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9pl45"]
Feb 04 09:10:23 crc kubenswrapper[4644]: I0204 09:10:23.036952 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9pl45"]
Feb 04 09:10:24 crc kubenswrapper[4644]: I0204 09:10:24.670962 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcc65018-86ae-4c36-bf21-3849c09ee648" path="/var/lib/kubelet/pods/bcc65018-86ae-4c36-bf21-3849c09ee648/volumes"
Feb 04 09:10:31 crc kubenswrapper[4644]: I0204 09:10:31.660452 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982"
Feb 04 09:10:31 crc kubenswrapper[4644]: E0204 09:10:31.661233 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:10:33 crc kubenswrapper[4644]: I0204 09:10:33.031357 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-6tq9r"]
Feb 04 09:10:33 crc kubenswrapper[4644]: I0204 09:10:33.042901 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-6tq9r"]
Feb 04 09:10:34 crc kubenswrapper[4644]: I0204 09:10:34.672303 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4623241a-c4dc-4646-9b03-aa89b84ca4b1" path="/var/lib/kubelet/pods/4623241a-c4dc-4646-9b03-aa89b84ca4b1/volumes"
Feb 04 09:10:35 crc kubenswrapper[4644]: I0204 09:10:35.040069 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-nlr7w"]
Feb 04 09:10:35 crc kubenswrapper[4644]: I0204 09:10:35.048167 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-nlr7w"]
Feb 04 09:10:36 crc kubenswrapper[4644]: I0204 09:10:36.675474 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6677efd-b2e4-45b7-8703-3a189d87723d" path="/var/lib/kubelet/pods/c6677efd-b2e4-45b7-8703-3a189d87723d/volumes"
Feb 04 09:10:37 crc kubenswrapper[4644]: I0204 09:10:37.033375 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-wlwtb"]
Feb 04 09:10:37 crc kubenswrapper[4644]: I0204 09:10:37.044169 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-wlwtb"]
Feb 04 09:10:38 crc kubenswrapper[4644]: I0204 09:10:38.671166 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6df95b1-d952-4b17-bb90-2a32fecb0a5b" path="/var/lib/kubelet/pods/f6df95b1-d952-4b17-bb90-2a32fecb0a5b/volumes"
Feb 04 09:10:42 crc kubenswrapper[4644]: I0204 09:10:42.661182 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982"
Feb 04 09:10:42 crc kubenswrapper[4644]: E0204 09:10:42.661804 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:10:53 crc kubenswrapper[4644]: I0204 09:10:53.660402 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982"
Feb 04 09:10:53 crc kubenswrapper[4644]: E0204 09:10:53.661220 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:11:00 crc kubenswrapper[4644]: I0204 09:11:00.877986 4644 scope.go:117] "RemoveContainer" containerID="2706aa2342f9aea54fa160b95e38c4506698fd9a10b92137e2b2993b35e705fd" Feb 04 09:11:00 crc kubenswrapper[4644]: I0204 09:11:00.906311 4644 scope.go:117] "RemoveContainer" containerID="30570130a3239b29d0b7cd583c4eefdd2b82a5f8b87fc328a416e2332a72b513" Feb 04 09:11:00 crc kubenswrapper[4644]: I0204 09:11:00.949895 4644 scope.go:117] "RemoveContainer" containerID="334bb5ebaa42cec7c5168838f937cf076e611ae95464a97820900a1878fbd00d" Feb 04 09:11:01 crc kubenswrapper[4644]: I0204 09:11:01.024486 4644 scope.go:117] "RemoveContainer" containerID="670c2e5f4ed96c187b0beb28f86190ee1f82d5b48e3b119af7d03ec908f58abf" Feb 04 09:11:01 crc kubenswrapper[4644]: I0204 09:11:01.052225 4644 scope.go:117] "RemoveContainer" containerID="e9b4afe63498807468ab2afa12176fe6e890d0486f5c4adff425ef21e97a8a10" Feb 04 09:11:01 crc kubenswrapper[4644]: I0204 09:11:01.074912 4644 scope.go:117] "RemoveContainer" containerID="b4ea2edce7481898ede27f2e0632b2493e34a218426087dca53f773598f2528c" Feb 04 09:11:01 crc kubenswrapper[4644]: I0204 09:11:01.117992 4644 scope.go:117] "RemoveContainer" containerID="4a88d8a2e5ca1806dab14a6ce567ef62dc89f54c07b4be8dc54d2a78f4daf780" Feb 04 09:11:07 crc kubenswrapper[4644]: I0204 09:11:07.660004 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:11:07 crc kubenswrapper[4644]: E0204 09:11:07.660690 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:11:22 crc kubenswrapper[4644]: I0204 09:11:22.660403 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:11:22 crc kubenswrapper[4644]: E0204 09:11:22.662716 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:11:29 crc kubenswrapper[4644]: I0204 09:11:29.476432 4644 generic.go:334] "Generic (PLEG): container finished" podID="01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58" containerID="3e80651715d9afd0bd7f5016e73501b225afa0930f467fc4ef141d9aabf0dd02" exitCode=0 Feb 04 09:11:29 crc kubenswrapper[4644]: I0204 09:11:29.476495 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" event={"ID":"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58","Type":"ContainerDied","Data":"3e80651715d9afd0bd7f5016e73501b225afa0930f467fc4ef141d9aabf0dd02"} Feb 04 09:11:30 crc kubenswrapper[4644]: I0204 09:11:30.893829 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.058468 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x6zq\" (UniqueName: \"kubernetes.io/projected/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-kube-api-access-2x6zq\") pod \"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58\" (UID: \"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58\") " Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.058545 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-inventory\") pod \"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58\" (UID: \"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58\") " Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.058683 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-ssh-key-openstack-edpm-ipam\") pod \"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58\" (UID: \"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58\") " Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.068521 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-kube-api-access-2x6zq" (OuterVolumeSpecName: "kube-api-access-2x6zq") pod "01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58" (UID: "01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58"). InnerVolumeSpecName "kube-api-access-2x6zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.089119 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-inventory" (OuterVolumeSpecName: "inventory") pod "01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58" (UID: "01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.111846 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58" (UID: "01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.161562 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x6zq\" (UniqueName: \"kubernetes.io/projected/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-kube-api-access-2x6zq\") on node \"crc\" DevicePath \"\"" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.161605 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.161618 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.495832 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" event={"ID":"01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58","Type":"ContainerDied","Data":"2b7a1b540fe181988185c5b3391489c53f9d94f3b98945af6e99e90c27a5595d"} Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.495881 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b7a1b540fe181988185c5b3391489c53f9d94f3b98945af6e99e90c27a5595d" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.495928 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.613310 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw"] Feb 04 09:11:31 crc kubenswrapper[4644]: E0204 09:11:31.613895 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.613922 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.614178 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.614982 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.634805 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.635188 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.635422 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.635719 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.669589 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw"] Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.773080 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvgr4\" (UniqueName: \"kubernetes.io/projected/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-kube-api-access-hvgr4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw\" (UID: \"54674fd4-5080-4cea-8cf9-7c6bbd9c53de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.773354 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw\" (UID: \"54674fd4-5080-4cea-8cf9-7c6bbd9c53de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.773420 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw\" (UID: \"54674fd4-5080-4cea-8cf9-7c6bbd9c53de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.874861 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw\" (UID: \"54674fd4-5080-4cea-8cf9-7c6bbd9c53de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.874999 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvgr4\" (UniqueName: \"kubernetes.io/projected/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-kube-api-access-hvgr4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw\" (UID: \"54674fd4-5080-4cea-8cf9-7c6bbd9c53de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.875105 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw\" (UID: \"54674fd4-5080-4cea-8cf9-7c6bbd9c53de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.879677 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw\" (UID: \"54674fd4-5080-4cea-8cf9-7c6bbd9c53de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.879923 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw\" (UID: \"54674fd4-5080-4cea-8cf9-7c6bbd9c53de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.895657 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvgr4\" (UniqueName: \"kubernetes.io/projected/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-kube-api-access-hvgr4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw\" (UID: \"54674fd4-5080-4cea-8cf9-7c6bbd9c53de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" Feb 04 09:11:31 crc kubenswrapper[4644]: I0204 09:11:31.948692 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" Feb 04 09:11:32 crc kubenswrapper[4644]: I0204 09:11:32.499059 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw"] Feb 04 09:11:33 crc kubenswrapper[4644]: I0204 09:11:33.517789 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" event={"ID":"54674fd4-5080-4cea-8cf9-7c6bbd9c53de","Type":"ContainerStarted","Data":"d9bacd0654e8084515623a7aca09d6f97fa65db929cd4b7b57ebed08797b1d4a"} Feb 04 09:11:33 crc kubenswrapper[4644]: I0204 09:11:33.518444 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" event={"ID":"54674fd4-5080-4cea-8cf9-7c6bbd9c53de","Type":"ContainerStarted","Data":"0e2dbc4a0a6dfb8425d99f154c94e490f5a598ea4bcf6c7888f37a64680b35c4"} Feb 04 09:11:33 crc kubenswrapper[4644]: I0204 09:11:33.551983 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" podStartSLOduration=1.9264266 podStartE2EDuration="2.551960974s" podCreationTimestamp="2026-02-04 09:11:31 +0000 UTC" firstStartedPulling="2026-02-04 09:11:32.506185804 +0000 UTC m=+1802.546243559" lastFinishedPulling="2026-02-04 09:11:33.131720168 +0000 UTC m=+1803.171777933" observedRunningTime="2026-02-04 09:11:33.544398719 +0000 UTC m=+1803.584456484" watchObservedRunningTime="2026-02-04 09:11:33.551960974 +0000 UTC m=+1803.592018729" Feb 04 09:11:35 crc kubenswrapper[4644]: I0204 09:11:35.660886 4644 scope.go:117] "RemoveContainer" 
containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:11:35 crc kubenswrapper[4644]: E0204 09:11:35.661575 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:11:38 crc kubenswrapper[4644]: I0204 09:11:38.044580 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-09da-account-create-update-x9b8b"] Feb 04 09:11:38 crc kubenswrapper[4644]: I0204 09:11:38.054103 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-09da-account-create-update-x9b8b"] Feb 04 09:11:38 crc kubenswrapper[4644]: I0204 09:11:38.675491 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80419f02-9077-41fe-94d7-eed2c3dbdd46" path="/var/lib/kubelet/pods/80419f02-9077-41fe-94d7-eed2c3dbdd46/volumes" Feb 04 09:11:39 crc kubenswrapper[4644]: I0204 09:11:39.037260 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-zffrn"] Feb 04 09:11:39 crc kubenswrapper[4644]: I0204 09:11:39.046164 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vs7nh"] Feb 04 09:11:39 crc kubenswrapper[4644]: I0204 09:11:39.053886 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0b44-account-create-update-td6dv"] Feb 04 09:11:39 crc kubenswrapper[4644]: I0204 09:11:39.061465 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-f4e7-account-create-update-s9zj2"] Feb 04 09:11:39 crc kubenswrapper[4644]: I0204 09:11:39.070158 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-hfqtm"] Feb 04 09:11:39 crc kubenswrapper[4644]: I0204 09:11:39.078618 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0b44-account-create-update-td6dv"] Feb 04 09:11:39 crc kubenswrapper[4644]: I0204 09:11:39.086117 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-f4e7-account-create-update-s9zj2"] Feb 04 09:11:39 crc kubenswrapper[4644]: I0204 09:11:39.107750 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vs7nh"] Feb 04 09:11:39 crc kubenswrapper[4644]: I0204 09:11:39.115283 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-zffrn"] Feb 04 09:11:39 crc kubenswrapper[4644]: I0204 09:11:39.126372 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-hfqtm"] Feb 04 09:11:40 crc kubenswrapper[4644]: I0204 09:11:40.671211 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1296c955-bd6d-4611-8bfa-abe4658610e1" path="/var/lib/kubelet/pods/1296c955-bd6d-4611-8bfa-abe4658610e1/volumes" Feb 04 09:11:40 crc kubenswrapper[4644]: I0204 09:11:40.672056 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e675941-cfe4-450e-87e8-8f0e0e68d7de" path="/var/lib/kubelet/pods/2e675941-cfe4-450e-87e8-8f0e0e68d7de/volumes" Feb 04 09:11:40 crc kubenswrapper[4644]: I0204 09:11:40.672588 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ef1c0ca-ffc0-495d-a4d1-da74c34137fe" 
path="/var/lib/kubelet/pods/5ef1c0ca-ffc0-495d-a4d1-da74c34137fe/volumes" Feb 04 09:11:40 crc kubenswrapper[4644]: I0204 09:11:40.673078 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70186d10-7adc-40ec-b71b-0d9cd786d034" path="/var/lib/kubelet/pods/70186d10-7adc-40ec-b71b-0d9cd786d034/volumes" Feb 04 09:11:40 crc kubenswrapper[4644]: I0204 09:11:40.674032 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dd0b3ba-0734-4358-886a-28dfc62ff494" path="/var/lib/kubelet/pods/7dd0b3ba-0734-4358-886a-28dfc62ff494/volumes" Feb 04 09:11:46 crc kubenswrapper[4644]: I0204 09:11:46.660033 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:11:46 crc kubenswrapper[4644]: E0204 09:11:46.660878 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:11:50 crc kubenswrapper[4644]: I0204 09:11:50.768512 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gzknw"] Feb 04 09:11:50 crc kubenswrapper[4644]: I0204 09:11:50.771192 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:11:50 crc kubenswrapper[4644]: I0204 09:11:50.780552 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gzknw"] Feb 04 09:11:50 crc kubenswrapper[4644]: I0204 09:11:50.858195 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-catalog-content\") pod \"redhat-operators-gzknw\" (UID: \"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36\") " pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:11:50 crc kubenswrapper[4644]: I0204 09:11:50.858584 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ntz9\" (UniqueName: \"kubernetes.io/projected/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-kube-api-access-4ntz9\") pod \"redhat-operators-gzknw\" (UID: \"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36\") " pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:11:50 crc kubenswrapper[4644]: I0204 09:11:50.858609 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-utilities\") pod \"redhat-operators-gzknw\" (UID: \"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36\") " pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:11:50 crc kubenswrapper[4644]: I0204 09:11:50.960709 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-catalog-content\") pod \"redhat-operators-gzknw\" (UID: \"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36\") " pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:11:50 crc kubenswrapper[4644]: I0204 09:11:50.960863 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4ntz9\" (UniqueName: \"kubernetes.io/projected/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-kube-api-access-4ntz9\") pod \"redhat-operators-gzknw\" (UID: \"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36\") " pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:11:50 crc kubenswrapper[4644]: I0204 09:11:50.960909 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-utilities\") pod \"redhat-operators-gzknw\" (UID: \"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36\") " pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:11:50 crc kubenswrapper[4644]: I0204 09:11:50.961561 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-catalog-content\") pod \"redhat-operators-gzknw\" (UID: \"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36\") " pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:11:50 crc kubenswrapper[4644]: I0204 09:11:50.961745 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-utilities\") pod \"redhat-operators-gzknw\" (UID: \"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36\") " pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:11:50 crc kubenswrapper[4644]: I0204 09:11:50.981930 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ntz9\" (UniqueName: \"kubernetes.io/projected/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-kube-api-access-4ntz9\") pod \"redhat-operators-gzknw\" (UID: \"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36\") " pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:11:51 crc kubenswrapper[4644]: I0204 09:11:51.093475 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:11:51 crc kubenswrapper[4644]: I0204 09:11:51.577603 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gzknw"] Feb 04 09:11:51 crc kubenswrapper[4644]: W0204 09:11:51.578090 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad1bbb7b_939c_4cf1_bfd8_44f3a008aa36.slice/crio-12fb2df41be9145da7410bda1238cec2ae546bae3863d26c4fefb298c4bb14b9 WatchSource:0}: Error finding container 12fb2df41be9145da7410bda1238cec2ae546bae3863d26c4fefb298c4bb14b9: Status 404 returned error can't find the container with id 12fb2df41be9145da7410bda1238cec2ae546bae3863d26c4fefb298c4bb14b9 Feb 04 09:11:51 crc kubenswrapper[4644]: I0204 09:11:51.693998 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzknw" event={"ID":"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36","Type":"ContainerStarted","Data":"12fb2df41be9145da7410bda1238cec2ae546bae3863d26c4fefb298c4bb14b9"} Feb 04 09:11:52 crc kubenswrapper[4644]: I0204 09:11:52.707909 4644 generic.go:334] "Generic (PLEG): container finished" podID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerID="8de0ee36ba3ac1140ed827979929b837f4092126a21902f648eccb5dd6130af1" exitCode=0 Feb 04 09:11:52 crc kubenswrapper[4644]: I0204 09:11:52.708024 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzknw" event={"ID":"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36","Type":"ContainerDied","Data":"8de0ee36ba3ac1140ed827979929b837f4092126a21902f648eccb5dd6130af1"} Feb 04 09:11:54 crc kubenswrapper[4644]: I0204 09:11:54.727438 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzknw" event={"ID":"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36","Type":"ContainerStarted","Data":"87d78f8b8084f5f2b68fc3d4a621e07d92832b928f29bf5c451f0b5b00fdece8"} Feb 04 09:11:57 crc kubenswrapper[4644]: I0204 09:11:57.660175 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:11:57 crc kubenswrapper[4644]: E0204 09:11:57.661180 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:12:01 crc kubenswrapper[4644]: I0204 09:12:01.270815 4644 scope.go:117] "RemoveContainer" containerID="be1faf799218fa365f6123a04a1ac22818c8ce75f4cea3b2eb0d058f20d1ba68" Feb 04 09:12:01 crc kubenswrapper[4644]: I0204 09:12:01.297844 4644 scope.go:117] "RemoveContainer" containerID="90fea62efc85d5be2c74d55bfff5182e77dc9b19ba41117616a092602f115056" Feb 04 09:12:01 crc kubenswrapper[4644]: I0204 09:12:01.334448 4644 scope.go:117] "RemoveContainer" containerID="6c16233525e625ecab3d118541453b1d23af7296b0fb53135f67b6771916e66b" Feb 04 09:12:01 crc kubenswrapper[4644]: I0204 09:12:01.376159 4644 scope.go:117] "RemoveContainer" containerID="64da4c7367a472bfa748cfe1f4cf4aa91436d4de9cbc3eb92fd8c78f64fb3efd" Feb 04 09:12:01 crc kubenswrapper[4644]: I0204 09:12:01.426951 4644 scope.go:117] "RemoveContainer" 
containerID="81b784ceb4237fd881f757f52eaea09b38df2fcde2ac49f50355cf7346e4f327" Feb 04 09:12:01 crc kubenswrapper[4644]: I0204 09:12:01.472974 4644 scope.go:117] "RemoveContainer" containerID="70e25a297dd5c073fec613b3aea2a3be5fdd619956b25e4b9ebf2375a7bcab58" Feb 04 09:12:03 crc kubenswrapper[4644]: I0204 09:12:03.825833 4644 generic.go:334] "Generic (PLEG): container finished" podID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerID="87d78f8b8084f5f2b68fc3d4a621e07d92832b928f29bf5c451f0b5b00fdece8" exitCode=0 Feb 04 09:12:03 crc kubenswrapper[4644]: I0204 09:12:03.825888 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzknw" event={"ID":"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36","Type":"ContainerDied","Data":"87d78f8b8084f5f2b68fc3d4a621e07d92832b928f29bf5c451f0b5b00fdece8"} Feb 04 09:12:04 crc kubenswrapper[4644]: I0204 09:12:04.838593 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzknw" event={"ID":"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36","Type":"ContainerStarted","Data":"b00f3b8e61629ca396a99c74a2c2b1a50db15c59b4d0814091f0064431c3d01e"} Feb 04 09:12:04 crc kubenswrapper[4644]: I0204 09:12:04.867652 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gzknw" podStartSLOduration=3.283861875 podStartE2EDuration="14.867634553s" podCreationTimestamp="2026-02-04 09:11:50 +0000 UTC" firstStartedPulling="2026-02-04 09:11:52.710217274 +0000 UTC m=+1822.750275029" lastFinishedPulling="2026-02-04 09:12:04.293989952 +0000 UTC m=+1834.334047707" observedRunningTime="2026-02-04 09:12:04.858358513 +0000 UTC m=+1834.898416288" watchObservedRunningTime="2026-02-04 09:12:04.867634553 +0000 UTC m=+1834.907692308" Feb 04 09:12:10 crc kubenswrapper[4644]: I0204 09:12:10.666626 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:12:10 crc kubenswrapper[4644]: E0204 09:12:10.667377 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:12:11 crc kubenswrapper[4644]: I0204 09:12:11.094071 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:12:11 crc kubenswrapper[4644]: I0204 09:12:11.094144 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:12:12 crc kubenswrapper[4644]: I0204 09:12:12.141913 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzknw" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerName="registry-server" probeResult="failure" output=< Feb 04 09:12:12 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:12:12 crc kubenswrapper[4644]: > Feb 04 09:12:21 crc kubenswrapper[4644]: I0204 09:12:21.660066 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:12:21 crc kubenswrapper[4644]: E0204 09:12:21.660959 4644 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:12:22 crc kubenswrapper[4644]: I0204 09:12:22.141933 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzknw" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerName="registry-server" probeResult="failure" output=< Feb 04 09:12:22 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:12:22 crc kubenswrapper[4644]: > Feb 04 09:12:25 crc kubenswrapper[4644]: I0204 09:12:25.039005 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ng8rj"] Feb 04 09:12:25 crc kubenswrapper[4644]: I0204 09:12:25.047553 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ng8rj"] Feb 04 09:12:26 crc kubenswrapper[4644]: I0204 09:12:26.673739 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0880634-6912-4a8b-98b2-b18209a19896" path="/var/lib/kubelet/pods/a0880634-6912-4a8b-98b2-b18209a19896/volumes" Feb 04 09:12:32 crc kubenswrapper[4644]: I0204 09:12:32.146049 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzknw" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerName="registry-server" probeResult="failure" output=< Feb 04 09:12:32 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:12:32 crc kubenswrapper[4644]: > Feb 04 09:12:34 crc kubenswrapper[4644]: I0204 09:12:34.659973 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:12:34 crc kubenswrapper[4644]: E0204 09:12:34.661177 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:12:42 crc kubenswrapper[4644]: I0204 09:12:42.150461 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzknw" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerName="registry-server" probeResult="failure" output=< Feb 04 09:12:42 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:12:42 crc kubenswrapper[4644]: > Feb 04 09:12:47 crc kubenswrapper[4644]: I0204 09:12:47.661185 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:12:47 crc kubenswrapper[4644]: E0204 09:12:47.662212 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" 
podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:12:52 crc kubenswrapper[4644]: I0204 09:12:52.141669 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzknw" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerName="registry-server" probeResult="failure" output=< Feb 04 09:12:52 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:12:52 crc kubenswrapper[4644]: > Feb 04 09:12:53 crc kubenswrapper[4644]: I0204 09:12:53.050475 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-k2hj8"] Feb 04 09:12:53 crc kubenswrapper[4644]: I0204 09:12:53.059390 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-k2hj8"] Feb 04 09:12:53 crc kubenswrapper[4644]: I0204 09:12:53.259821 4644 generic.go:334] "Generic (PLEG): container finished" podID="54674fd4-5080-4cea-8cf9-7c6bbd9c53de" containerID="d9bacd0654e8084515623a7aca09d6f97fa65db929cd4b7b57ebed08797b1d4a" exitCode=0 Feb 04 09:12:53 crc kubenswrapper[4644]: I0204 09:12:53.259871 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" event={"ID":"54674fd4-5080-4cea-8cf9-7c6bbd9c53de","Type":"ContainerDied","Data":"d9bacd0654e8084515623a7aca09d6f97fa65db929cd4b7b57ebed08797b1d4a"} Feb 04 09:12:54 crc kubenswrapper[4644]: I0204 09:12:54.682707 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13bbfe4b-94f7-4d67-9486-84fe0d0148a7" path="/var/lib/kubelet/pods/13bbfe4b-94f7-4d67-9486-84fe0d0148a7/volumes" Feb 04 09:12:54 crc kubenswrapper[4644]: I0204 09:12:54.921074 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" Feb 04 09:12:54 crc kubenswrapper[4644]: I0204 09:12:54.974126 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-ssh-key-openstack-edpm-ipam\") pod \"54674fd4-5080-4cea-8cf9-7c6bbd9c53de\" (UID: \"54674fd4-5080-4cea-8cf9-7c6bbd9c53de\") " Feb 04 09:12:54 crc kubenswrapper[4644]: I0204 09:12:54.974443 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-inventory\") pod \"54674fd4-5080-4cea-8cf9-7c6bbd9c53de\" (UID: \"54674fd4-5080-4cea-8cf9-7c6bbd9c53de\") " Feb 04 09:12:54 crc kubenswrapper[4644]: I0204 09:12:54.975000 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvgr4\" (UniqueName: \"kubernetes.io/projected/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-kube-api-access-hvgr4\") pod \"54674fd4-5080-4cea-8cf9-7c6bbd9c53de\" (UID: \"54674fd4-5080-4cea-8cf9-7c6bbd9c53de\") " Feb 04 09:12:54 crc kubenswrapper[4644]: I0204 09:12:54.986057 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-kube-api-access-hvgr4" (OuterVolumeSpecName: "kube-api-access-hvgr4") pod "54674fd4-5080-4cea-8cf9-7c6bbd9c53de" (UID: "54674fd4-5080-4cea-8cf9-7c6bbd9c53de"). InnerVolumeSpecName "kube-api-access-hvgr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.011501 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "54674fd4-5080-4cea-8cf9-7c6bbd9c53de" (UID: "54674fd4-5080-4cea-8cf9-7c6bbd9c53de"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.035363 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7bsw6"] Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.043455 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-inventory" (OuterVolumeSpecName: "inventory") pod "54674fd4-5080-4cea-8cf9-7c6bbd9c53de" (UID: "54674fd4-5080-4cea-8cf9-7c6bbd9c53de"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.043823 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7bsw6"] Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.077311 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.077478 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvgr4\" (UniqueName: \"kubernetes.io/projected/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-kube-api-access-hvgr4\") on node \"crc\" DevicePath \"\"" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.077538 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54674fd4-5080-4cea-8cf9-7c6bbd9c53de-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.280249 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" event={"ID":"54674fd4-5080-4cea-8cf9-7c6bbd9c53de","Type":"ContainerDied","Data":"0e2dbc4a0a6dfb8425d99f154c94e490f5a598ea4bcf6c7888f37a64680b35c4"} Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.280697 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e2dbc4a0a6dfb8425d99f154c94e490f5a598ea4bcf6c7888f37a64680b35c4" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.280724 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.385793 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7"] Feb 04 09:12:55 crc kubenswrapper[4644]: E0204 09:12:55.386257 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54674fd4-5080-4cea-8cf9-7c6bbd9c53de" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.386282 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="54674fd4-5080-4cea-8cf9-7c6bbd9c53de" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.386549 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="54674fd4-5080-4cea-8cf9-7c6bbd9c53de" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.387370 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.389592 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.389842 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.390290 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.390509 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.401577 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7"] Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.483298 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42aaff39-4ff2-44b5-9770-56fc11241b30-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7\" (UID: \"42aaff39-4ff2-44b5-9770-56fc11241b30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.483444 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgbv2\" (UniqueName: \"kubernetes.io/projected/42aaff39-4ff2-44b5-9770-56fc11241b30-kube-api-access-qgbv2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7\" (UID: \"42aaff39-4ff2-44b5-9770-56fc11241b30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.483483 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42aaff39-4ff2-44b5-9770-56fc11241b30-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7\" (UID: \"42aaff39-4ff2-44b5-9770-56fc11241b30\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.584817 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42aaff39-4ff2-44b5-9770-56fc11241b30-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7\" (UID: \"42aaff39-4ff2-44b5-9770-56fc11241b30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.584955 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42aaff39-4ff2-44b5-9770-56fc11241b30-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7\" (UID: \"42aaff39-4ff2-44b5-9770-56fc11241b30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.585037 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgbv2\" (UniqueName: \"kubernetes.io/projected/42aaff39-4ff2-44b5-9770-56fc11241b30-kube-api-access-qgbv2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7\" (UID: \"42aaff39-4ff2-44b5-9770-56fc11241b30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.588624 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42aaff39-4ff2-44b5-9770-56fc11241b30-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7\" (UID: \"42aaff39-4ff2-44b5-9770-56fc11241b30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.588631 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42aaff39-4ff2-44b5-9770-56fc11241b30-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7\" (UID: \"42aaff39-4ff2-44b5-9770-56fc11241b30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.603832 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgbv2\" (UniqueName: \"kubernetes.io/projected/42aaff39-4ff2-44b5-9770-56fc11241b30-kube-api-access-qgbv2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7\" (UID: \"42aaff39-4ff2-44b5-9770-56fc11241b30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" Feb 04 09:12:55 crc kubenswrapper[4644]: I0204 09:12:55.705159 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" Feb 04 09:12:56 crc kubenswrapper[4644]: I0204 09:12:56.044626 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7"] Feb 04 09:12:56 crc kubenswrapper[4644]: I0204 09:12:56.290596 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" event={"ID":"42aaff39-4ff2-44b5-9770-56fc11241b30","Type":"ContainerStarted","Data":"23a65a8e0bc83ae34bbd150b895e3647af0bcd054f47e69adbc823ee45a23e7f"} Feb 04 09:12:56 crc kubenswrapper[4644]: I0204 09:12:56.675802 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a5663f-2ce9-417b-a359-5db9a580628b" path="/var/lib/kubelet/pods/e4a5663f-2ce9-417b-a359-5db9a580628b/volumes" Feb 04 09:12:59 crc kubenswrapper[4644]: I0204 09:12:59.326296 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" event={"ID":"42aaff39-4ff2-44b5-9770-56fc11241b30","Type":"ContainerStarted","Data":"72120de5fe88fc7d8527097c85b679620dc69e37326d972f6dbf966722f3213f"} Feb 04 09:12:59 crc kubenswrapper[4644]: I0204 09:12:59.375962 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" podStartSLOduration=1.956231427 podStartE2EDuration="4.375936023s" podCreationTimestamp="2026-02-04 09:12:55 +0000 UTC" firstStartedPulling="2026-02-04 09:12:56.053734339 +0000 UTC m=+1886.093792094" lastFinishedPulling="2026-02-04 09:12:58.473438895 +0000 UTC m=+1888.513496690" observedRunningTime="2026-02-04 09:12:59.361650187 +0000 UTC m=+1889.401707982" watchObservedRunningTime="2026-02-04 09:12:59.375936023 +0000 UTC m=+1889.415993798" Feb 04 09:13:00 crc kubenswrapper[4644]: I0204 09:13:00.676523 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:13:00 crc kubenswrapper[4644]: E0204 09:13:00.677082 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:13:01 crc kubenswrapper[4644]: I0204 09:13:01.611636 4644 scope.go:117] "RemoveContainer" containerID="bcf7faab1bdbfb04f06591d5dd695e11804aa4517d2e069a4859ccebb4ecd31e" Feb 04 09:13:01 crc kubenswrapper[4644]: I0204 09:13:01.668283 4644 scope.go:117] "RemoveContainer" containerID="bbb89581a7f440c7bb50098d9840fadf2d2ab7492b02b8d833e0634207094cbd" Feb 04 09:13:01 crc kubenswrapper[4644]: I0204 09:13:01.718586 4644 scope.go:117] "RemoveContainer" containerID="75705b14068c1ae7b352ef3819097dadf96bb7380f42402734539b03018e6b4b" Feb 04 09:13:02 crc kubenswrapper[4644]: I0204 09:13:02.149245 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzknw" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerName="registry-server" probeResult="failure" output=< Feb 04 09:13:02 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:13:02 crc kubenswrapper[4644]: > Feb 04 09:13:10 crc 
kubenswrapper[4644]: I0204 09:13:10.431162 4644 generic.go:334] "Generic (PLEG): container finished" podID="42aaff39-4ff2-44b5-9770-56fc11241b30" containerID="72120de5fe88fc7d8527097c85b679620dc69e37326d972f6dbf966722f3213f" exitCode=0 Feb 04 09:13:10 crc kubenswrapper[4644]: I0204 09:13:10.431257 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" event={"ID":"42aaff39-4ff2-44b5-9770-56fc11241b30","Type":"ContainerDied","Data":"72120de5fe88fc7d8527097c85b679620dc69e37326d972f6dbf966722f3213f"} Feb 04 09:13:11 crc kubenswrapper[4644]: I0204 09:13:11.663035 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982" Feb 04 09:13:11 crc kubenswrapper[4644]: I0204 09:13:11.884231 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" Feb 04 09:13:11 crc kubenswrapper[4644]: I0204 09:13:11.904412 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42aaff39-4ff2-44b5-9770-56fc11241b30-inventory\") pod \"42aaff39-4ff2-44b5-9770-56fc11241b30\" (UID: \"42aaff39-4ff2-44b5-9770-56fc11241b30\") " Feb 04 09:13:11 crc kubenswrapper[4644]: I0204 09:13:11.904712 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgbv2\" (UniqueName: \"kubernetes.io/projected/42aaff39-4ff2-44b5-9770-56fc11241b30-kube-api-access-qgbv2\") pod \"42aaff39-4ff2-44b5-9770-56fc11241b30\" (UID: \"42aaff39-4ff2-44b5-9770-56fc11241b30\") " Feb 04 09:13:11 crc kubenswrapper[4644]: I0204 09:13:11.905007 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42aaff39-4ff2-44b5-9770-56fc11241b30-ssh-key-openstack-edpm-ipam\") pod \"42aaff39-4ff2-44b5-9770-56fc11241b30\" (UID: \"42aaff39-4ff2-44b5-9770-56fc11241b30\") " Feb 04 09:13:11 crc kubenswrapper[4644]: I0204 09:13:11.938519 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42aaff39-4ff2-44b5-9770-56fc11241b30-kube-api-access-qgbv2" (OuterVolumeSpecName: "kube-api-access-qgbv2") pod "42aaff39-4ff2-44b5-9770-56fc11241b30" (UID: "42aaff39-4ff2-44b5-9770-56fc11241b30"). InnerVolumeSpecName "kube-api-access-qgbv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:13:11 crc kubenswrapper[4644]: I0204 09:13:11.976698 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42aaff39-4ff2-44b5-9770-56fc11241b30-inventory" (OuterVolumeSpecName: "inventory") pod "42aaff39-4ff2-44b5-9770-56fc11241b30" (UID: "42aaff39-4ff2-44b5-9770-56fc11241b30"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:13:11 crc kubenswrapper[4644]: I0204 09:13:11.983478 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42aaff39-4ff2-44b5-9770-56fc11241b30-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "42aaff39-4ff2-44b5-9770-56fc11241b30" (UID: "42aaff39-4ff2-44b5-9770-56fc11241b30"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.007086 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42aaff39-4ff2-44b5-9770-56fc11241b30-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.007123 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgbv2\" (UniqueName: \"kubernetes.io/projected/42aaff39-4ff2-44b5-9770-56fc11241b30-kube-api-access-qgbv2\") on node \"crc\" DevicePath \"\"" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.007139 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42aaff39-4ff2-44b5-9770-56fc11241b30-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.149585 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzknw" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerName="registry-server" probeResult="failure" output=< Feb 04 09:13:12 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:13:12 crc kubenswrapper[4644]: > Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.457256 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"a8f1f22f8bb04b29ec4bff87a7286a4cc5bae3e174104b35d24f389917224840"} Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.461935 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" event={"ID":"42aaff39-4ff2-44b5-9770-56fc11241b30","Type":"ContainerDied","Data":"23a65a8e0bc83ae34bbd150b895e3647af0bcd054f47e69adbc823ee45a23e7f"} Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.461979 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23a65a8e0bc83ae34bbd150b895e3647af0bcd054f47e69adbc823ee45a23e7f" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.462046 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.570456 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj"] Feb 04 09:13:12 crc kubenswrapper[4644]: E0204 09:13:12.570841 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42aaff39-4ff2-44b5-9770-56fc11241b30" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.570857 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="42aaff39-4ff2-44b5-9770-56fc11241b30" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.571027 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="42aaff39-4ff2-44b5-9770-56fc11241b30" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.571616 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.576826 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.577038 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.577171 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.577610 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.596267 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj"] Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.619655 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnmkw\" (UniqueName: \"kubernetes.io/projected/110ef1d0-ffbc-4356-9c1f-169889312eef-kube-api-access-gnmkw\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bzzbj\" (UID: \"110ef1d0-ffbc-4356-9c1f-169889312eef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.619701 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/110ef1d0-ffbc-4356-9c1f-169889312eef-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bzzbj\" (UID: \"110ef1d0-ffbc-4356-9c1f-169889312eef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.619961 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/110ef1d0-ffbc-4356-9c1f-169889312eef-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bzzbj\" (UID: \"110ef1d0-ffbc-4356-9c1f-169889312eef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.721484 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnmkw\" (UniqueName: \"kubernetes.io/projected/110ef1d0-ffbc-4356-9c1f-169889312eef-kube-api-access-gnmkw\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bzzbj\" (UID: \"110ef1d0-ffbc-4356-9c1f-169889312eef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.721542 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/110ef1d0-ffbc-4356-9c1f-169889312eef-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bzzbj\" (UID: \"110ef1d0-ffbc-4356-9c1f-169889312eef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.721663 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/110ef1d0-ffbc-4356-9c1f-169889312eef-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-bzzbj\" (UID: \"110ef1d0-ffbc-4356-9c1f-169889312eef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.726753 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/110ef1d0-ffbc-4356-9c1f-169889312eef-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bzzbj\" (UID: \"110ef1d0-ffbc-4356-9c1f-169889312eef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.731266 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/110ef1d0-ffbc-4356-9c1f-169889312eef-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bzzbj\" (UID: \"110ef1d0-ffbc-4356-9c1f-169889312eef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.741087 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnmkw\" (UniqueName: \"kubernetes.io/projected/110ef1d0-ffbc-4356-9c1f-169889312eef-kube-api-access-gnmkw\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bzzbj\" (UID: \"110ef1d0-ffbc-4356-9c1f-169889312eef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" Feb 04 09:13:12 crc kubenswrapper[4644]: I0204 09:13:12.892615 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" Feb 04 09:13:13 crc kubenswrapper[4644]: I0204 09:13:13.444694 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj"] Feb 04 09:13:13 crc kubenswrapper[4644]: I0204 09:13:13.470798 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" event={"ID":"110ef1d0-ffbc-4356-9c1f-169889312eef","Type":"ContainerStarted","Data":"20bc1147f2fb6becfa4f0d68dff53a6ae39c3a4b883d913c3c76441c8ff2504b"} Feb 04 09:13:17 crc kubenswrapper[4644]: I0204 09:13:17.505238 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" event={"ID":"110ef1d0-ffbc-4356-9c1f-169889312eef","Type":"ContainerStarted","Data":"f1c35bc0d2d74e58b96985ea6ee7270f0008f2c7e3cc401e83b2db7e933ec217"} Feb 04 09:13:17 crc kubenswrapper[4644]: I0204 09:13:17.528006 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" podStartSLOduration=2.4702694640000002 podStartE2EDuration="5.527980571s" podCreationTimestamp="2026-02-04 09:13:12 +0000 UTC" firstStartedPulling="2026-02-04 09:13:13.456585073 +0000 UTC m=+1903.496642828" lastFinishedPulling="2026-02-04 09:13:16.51429618 +0000 UTC m=+1906.554353935" observedRunningTime="2026-02-04 09:13:17.521062485 +0000 UTC m=+1907.561120250" watchObservedRunningTime="2026-02-04 09:13:17.527980571 +0000 UTC m=+1907.568038326" Feb 04 09:13:21 crc kubenswrapper[4644]: I0204 09:13:21.165910 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:13:21 crc kubenswrapper[4644]: I0204 09:13:21.227776 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:13:22 crc kubenswrapper[4644]: I0204 09:13:22.024163 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gzknw"] Feb 04 09:13:22 crc kubenswrapper[4644]: I0204 09:13:22.542384 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gzknw" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerName="registry-server" containerID="cri-o://b00f3b8e61629ca396a99c74a2c2b1a50db15c59b4d0814091f0064431c3d01e" gracePeriod=2 Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.126968 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.286595 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-utilities\") pod \"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36\" (UID: \"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36\") " Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.286780 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-catalog-content\") pod \"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36\" (UID: \"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36\") " Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.286834 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ntz9\" (UniqueName: \"kubernetes.io/projected/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-kube-api-access-4ntz9\") pod \"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36\" (UID: \"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36\") " Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.287624 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-utilities" (OuterVolumeSpecName: "utilities") pod "ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" (UID: "ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.305501 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-kube-api-access-4ntz9" (OuterVolumeSpecName: "kube-api-access-4ntz9") pod "ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" (UID: "ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36"). InnerVolumeSpecName "kube-api-access-4ntz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.389619 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.389656 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ntz9\" (UniqueName: \"kubernetes.io/projected/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-kube-api-access-4ntz9\") on node \"crc\" DevicePath \"\"" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.517006 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" (UID: "ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.576697 4644 generic.go:334] "Generic (PLEG): container finished" podID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerID="b00f3b8e61629ca396a99c74a2c2b1a50db15c59b4d0814091f0064431c3d01e" exitCode=0 Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.576752 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzknw" event={"ID":"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36","Type":"ContainerDied","Data":"b00f3b8e61629ca396a99c74a2c2b1a50db15c59b4d0814091f0064431c3d01e"} Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.576785 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzknw" event={"ID":"ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36","Type":"ContainerDied","Data":"12fb2df41be9145da7410bda1238cec2ae546bae3863d26c4fefb298c4bb14b9"} Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.576781 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gzknw" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.576801 4644 scope.go:117] "RemoveContainer" containerID="b00f3b8e61629ca396a99c74a2c2b1a50db15c59b4d0814091f0064431c3d01e" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.599216 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.611093 4644 scope.go:117] "RemoveContainer" containerID="87d78f8b8084f5f2b68fc3d4a621e07d92832b928f29bf5c451f0b5b00fdece8" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.628728 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gzknw"] Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.650043 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gzknw"] Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.664609 4644 scope.go:117] "RemoveContainer" containerID="8de0ee36ba3ac1140ed827979929b837f4092126a21902f648eccb5dd6130af1" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.705200 4644 scope.go:117] "RemoveContainer" containerID="b00f3b8e61629ca396a99c74a2c2b1a50db15c59b4d0814091f0064431c3d01e" Feb 04 09:13:23 crc kubenswrapper[4644]: E0204 09:13:23.706284 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b00f3b8e61629ca396a99c74a2c2b1a50db15c59b4d0814091f0064431c3d01e\": container with ID starting with b00f3b8e61629ca396a99c74a2c2b1a50db15c59b4d0814091f0064431c3d01e not found: ID does not exist" containerID="b00f3b8e61629ca396a99c74a2c2b1a50db15c59b4d0814091f0064431c3d01e" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.706317 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b00f3b8e61629ca396a99c74a2c2b1a50db15c59b4d0814091f0064431c3d01e"} err="failed to get container status \"b00f3b8e61629ca396a99c74a2c2b1a50db15c59b4d0814091f0064431c3d01e\": rpc error: code = NotFound desc = could not find container \"b00f3b8e61629ca396a99c74a2c2b1a50db15c59b4d0814091f0064431c3d01e\": container with ID starting with b00f3b8e61629ca396a99c74a2c2b1a50db15c59b4d0814091f0064431c3d01e not found: ID does not exist" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.706355 4644 scope.go:117] "RemoveContainer" containerID="87d78f8b8084f5f2b68fc3d4a621e07d92832b928f29bf5c451f0b5b00fdece8" Feb 04 09:13:23 crc kubenswrapper[4644]: E0204 09:13:23.706650 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87d78f8b8084f5f2b68fc3d4a621e07d92832b928f29bf5c451f0b5b00fdece8\": container with ID starting with 87d78f8b8084f5f2b68fc3d4a621e07d92832b928f29bf5c451f0b5b00fdece8 not found: ID does not exist" containerID="87d78f8b8084f5f2b68fc3d4a621e07d92832b928f29bf5c451f0b5b00fdece8" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.706678 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87d78f8b8084f5f2b68fc3d4a621e07d92832b928f29bf5c451f0b5b00fdece8"} err="failed to get container status \"87d78f8b8084f5f2b68fc3d4a621e07d92832b928f29bf5c451f0b5b00fdece8\": rpc error: code = NotFound desc = could not find container 
\"87d78f8b8084f5f2b68fc3d4a621e07d92832b928f29bf5c451f0b5b00fdece8\": container with ID starting with 87d78f8b8084f5f2b68fc3d4a621e07d92832b928f29bf5c451f0b5b00fdece8 not found: ID does not exist" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.706696 4644 scope.go:117] "RemoveContainer" containerID="8de0ee36ba3ac1140ed827979929b837f4092126a21902f648eccb5dd6130af1" Feb 04 09:13:23 crc kubenswrapper[4644]: E0204 09:13:23.707116 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8de0ee36ba3ac1140ed827979929b837f4092126a21902f648eccb5dd6130af1\": container with ID starting with 8de0ee36ba3ac1140ed827979929b837f4092126a21902f648eccb5dd6130af1 not found: ID does not exist" containerID="8de0ee36ba3ac1140ed827979929b837f4092126a21902f648eccb5dd6130af1" Feb 04 09:13:23 crc kubenswrapper[4644]: I0204 09:13:23.707139 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de0ee36ba3ac1140ed827979929b837f4092126a21902f648eccb5dd6130af1"} err="failed to get container status \"8de0ee36ba3ac1140ed827979929b837f4092126a21902f648eccb5dd6130af1\": rpc error: code = NotFound desc = could not find container \"8de0ee36ba3ac1140ed827979929b837f4092126a21902f648eccb5dd6130af1\": container with ID starting with 8de0ee36ba3ac1140ed827979929b837f4092126a21902f648eccb5dd6130af1 not found: ID does not exist" Feb 04 09:13:24 crc kubenswrapper[4644]: I0204 09:13:24.670438 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" path="/var/lib/kubelet/pods/ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36/volumes" Feb 04 09:13:38 crc kubenswrapper[4644]: I0204 09:13:38.057258 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-jdgt7"] Feb 04 09:13:38 crc kubenswrapper[4644]: I0204 09:13:38.066127 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-jdgt7"] Feb 04 09:13:38 crc kubenswrapper[4644]: I0204 09:13:38.670559 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c5c3a3a-8314-4a01-b164-038d9247e569" path="/var/lib/kubelet/pods/0c5c3a3a-8314-4a01-b164-038d9247e569/volumes" Feb 04 09:14:01 crc kubenswrapper[4644]: I0204 09:14:01.856642 4644 scope.go:117] "RemoveContainer" containerID="2fe3556f7a3a4789fe708c36e64cfd33808338b807d85ecb8e8d3fb7a06a9067" Feb 04 09:14:03 crc kubenswrapper[4644]: I0204 09:14:03.933154 4644 generic.go:334] "Generic (PLEG): container finished" podID="110ef1d0-ffbc-4356-9c1f-169889312eef" containerID="f1c35bc0d2d74e58b96985ea6ee7270f0008f2c7e3cc401e83b2db7e933ec217" exitCode=0 Feb 04 09:14:03 crc kubenswrapper[4644]: I0204 09:14:03.933237 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" event={"ID":"110ef1d0-ffbc-4356-9c1f-169889312eef","Type":"ContainerDied","Data":"f1c35bc0d2d74e58b96985ea6ee7270f0008f2c7e3cc401e83b2db7e933ec217"} Feb 04 09:14:05 crc kubenswrapper[4644]: I0204 09:14:05.318815 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" Feb 04 09:14:05 crc kubenswrapper[4644]: I0204 09:14:05.452056 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/110ef1d0-ffbc-4356-9c1f-169889312eef-ssh-key-openstack-edpm-ipam\") pod \"110ef1d0-ffbc-4356-9c1f-169889312eef\" (UID: \"110ef1d0-ffbc-4356-9c1f-169889312eef\") " Feb 04 09:14:05 crc kubenswrapper[4644]: I0204 09:14:05.452157 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnmkw\" (UniqueName: \"kubernetes.io/projected/110ef1d0-ffbc-4356-9c1f-169889312eef-kube-api-access-gnmkw\") pod \"110ef1d0-ffbc-4356-9c1f-169889312eef\" (UID: \"110ef1d0-ffbc-4356-9c1f-169889312eef\") " Feb 04 09:14:05 crc kubenswrapper[4644]: I0204 09:14:05.452289 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/110ef1d0-ffbc-4356-9c1f-169889312eef-inventory\") pod \"110ef1d0-ffbc-4356-9c1f-169889312eef\" (UID: \"110ef1d0-ffbc-4356-9c1f-169889312eef\") " Feb 04 09:14:05 crc kubenswrapper[4644]: I0204 09:14:05.458695 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/110ef1d0-ffbc-4356-9c1f-169889312eef-kube-api-access-gnmkw" (OuterVolumeSpecName: "kube-api-access-gnmkw") pod "110ef1d0-ffbc-4356-9c1f-169889312eef" (UID: "110ef1d0-ffbc-4356-9c1f-169889312eef"). InnerVolumeSpecName "kube-api-access-gnmkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:14:05 crc kubenswrapper[4644]: I0204 09:14:05.482097 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/110ef1d0-ffbc-4356-9c1f-169889312eef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "110ef1d0-ffbc-4356-9c1f-169889312eef" (UID: "110ef1d0-ffbc-4356-9c1f-169889312eef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:14:05 crc kubenswrapper[4644]: I0204 09:14:05.487476 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/110ef1d0-ffbc-4356-9c1f-169889312eef-inventory" (OuterVolumeSpecName: "inventory") pod "110ef1d0-ffbc-4356-9c1f-169889312eef" (UID: "110ef1d0-ffbc-4356-9c1f-169889312eef"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:14:05 crc kubenswrapper[4644]: I0204 09:14:05.555828 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/110ef1d0-ffbc-4356-9c1f-169889312eef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 09:14:05 crc kubenswrapper[4644]: I0204 09:14:05.555882 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnmkw\" (UniqueName: \"kubernetes.io/projected/110ef1d0-ffbc-4356-9c1f-169889312eef-kube-api-access-gnmkw\") on node \"crc\" DevicePath \"\"" Feb 04 09:14:05 crc kubenswrapper[4644]: I0204 09:14:05.555901 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/110ef1d0-ffbc-4356-9c1f-169889312eef-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 09:14:05 crc kubenswrapper[4644]: I0204 09:14:05.953114 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" event={"ID":"110ef1d0-ffbc-4356-9c1f-169889312eef","Type":"ContainerDied","Data":"20bc1147f2fb6becfa4f0d68dff53a6ae39c3a4b883d913c3c76441c8ff2504b"} Feb 04 09:14:05 crc kubenswrapper[4644]: I0204 09:14:05.953163 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20bc1147f2fb6becfa4f0d68dff53a6ae39c3a4b883d913c3c76441c8ff2504b" Feb 04 09:14:05 crc kubenswrapper[4644]: I0204 09:14:05.953740 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bzzbj" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.057016 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z"] Feb 04 09:14:06 crc kubenswrapper[4644]: E0204 09:14:06.057387 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerName="extract-content" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.057402 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerName="extract-content" Feb 04 09:14:06 crc kubenswrapper[4644]: E0204 09:14:06.057427 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerName="registry-server" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.057432 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerName="registry-server" Feb 04 09:14:06 crc kubenswrapper[4644]: E0204 09:14:06.057442 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerName="extract-utilities" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.057448 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerName="extract-utilities" Feb 04 09:14:06 crc kubenswrapper[4644]: E0204 09:14:06.057469 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="110ef1d0-ffbc-4356-9c1f-169889312eef" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.057476 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="110ef1d0-ffbc-4356-9c1f-169889312eef" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.058627 
4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad1bbb7b-939c-4cf1-bfd8-44f3a008aa36" containerName="registry-server" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.058654 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="110ef1d0-ffbc-4356-9c1f-169889312eef" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.059248 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.061412 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.061553 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.061643 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.061935 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.076728 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z"] Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.167122 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10140326-561b-48b8-8746-576a83f36c12-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z\" (UID: \"10140326-561b-48b8-8746-576a83f36c12\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.167199 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10140326-561b-48b8-8746-576a83f36c12-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z\" (UID: \"10140326-561b-48b8-8746-576a83f36c12\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.167274 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gczhd\" (UniqueName: \"kubernetes.io/projected/10140326-561b-48b8-8746-576a83f36c12-kube-api-access-gczhd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z\" (UID: \"10140326-561b-48b8-8746-576a83f36c12\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.268877 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10140326-561b-48b8-8746-576a83f36c12-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z\" (UID: \"10140326-561b-48b8-8746-576a83f36c12\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.268959 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/10140326-561b-48b8-8746-576a83f36c12-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z\" (UID: \"10140326-561b-48b8-8746-576a83f36c12\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.269033 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gczhd\" (UniqueName: \"kubernetes.io/projected/10140326-561b-48b8-8746-576a83f36c12-kube-api-access-gczhd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z\" (UID: \"10140326-561b-48b8-8746-576a83f36c12\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.273090 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10140326-561b-48b8-8746-576a83f36c12-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z\" (UID: \"10140326-561b-48b8-8746-576a83f36c12\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.283940 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10140326-561b-48b8-8746-576a83f36c12-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z\" (UID: \"10140326-561b-48b8-8746-576a83f36c12\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.286364 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gczhd\" (UniqueName: \"kubernetes.io/projected/10140326-561b-48b8-8746-576a83f36c12-kube-api-access-gczhd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z\" (UID: \"10140326-561b-48b8-8746-576a83f36c12\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.378290 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.906958 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z"] Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.908481 4644 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 09:14:06 crc kubenswrapper[4644]: I0204 09:14:06.960903 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" event={"ID":"10140326-561b-48b8-8746-576a83f36c12","Type":"ContainerStarted","Data":"12e1701edce7dce12f3e43e078dbde5344a399774b83e1ef1a6c6a49d96b6124"} Feb 04 09:14:07 crc kubenswrapper[4644]: I0204 09:14:07.979974 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" event={"ID":"10140326-561b-48b8-8746-576a83f36c12","Type":"ContainerStarted","Data":"656489f0b2d4f3966e177add5034c868110d099f47bd1d3e5f4f8ab04e38f527"} Feb 04 09:14:08 crc kubenswrapper[4644]: I0204 09:14:08.008701 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" podStartSLOduration=1.342091716 podStartE2EDuration="2.008676008s" podCreationTimestamp="2026-02-04 09:14:06 +0000 UTC" firstStartedPulling="2026-02-04 09:14:06.908200291 +0000 UTC m=+1956.948258046" lastFinishedPulling="2026-02-04 09:14:07.574784583 +0000 UTC m=+1957.614842338" observedRunningTime="2026-02-04 09:14:08.003975432 +0000 UTC m=+1958.044033187" watchObservedRunningTime="2026-02-04 09:14:08.008676008 +0000 UTC m=+1958.048733763" Feb 04 09:14:56 crc kubenswrapper[4644]: I0204 09:14:56.388561 4644 generic.go:334] "Generic (PLEG): container finished" podID="10140326-561b-48b8-8746-576a83f36c12" containerID="656489f0b2d4f3966e177add5034c868110d099f47bd1d3e5f4f8ab04e38f527" exitCode=0 Feb 04 09:14:56 crc kubenswrapper[4644]: I0204 09:14:56.388787 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" event={"ID":"10140326-561b-48b8-8746-576a83f36c12","Type":"ContainerDied","Data":"656489f0b2d4f3966e177add5034c868110d099f47bd1d3e5f4f8ab04e38f527"} Feb 04 09:14:57 crc kubenswrapper[4644]: I0204 09:14:57.912598 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.114609 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10140326-561b-48b8-8746-576a83f36c12-ssh-key-openstack-edpm-ipam\") pod \"10140326-561b-48b8-8746-576a83f36c12\" (UID: \"10140326-561b-48b8-8746-576a83f36c12\") " Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.114657 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10140326-561b-48b8-8746-576a83f36c12-inventory\") pod \"10140326-561b-48b8-8746-576a83f36c12\" (UID: \"10140326-561b-48b8-8746-576a83f36c12\") " Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.114797 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gczhd\" (UniqueName: \"kubernetes.io/projected/10140326-561b-48b8-8746-576a83f36c12-kube-api-access-gczhd\") pod \"10140326-561b-48b8-8746-576a83f36c12\" (UID: \"10140326-561b-48b8-8746-576a83f36c12\") " Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.120996 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10140326-561b-48b8-8746-576a83f36c12-kube-api-access-gczhd" (OuterVolumeSpecName: "kube-api-access-gczhd") pod "10140326-561b-48b8-8746-576a83f36c12" (UID: "10140326-561b-48b8-8746-576a83f36c12"). InnerVolumeSpecName "kube-api-access-gczhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.146714 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10140326-561b-48b8-8746-576a83f36c12-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "10140326-561b-48b8-8746-576a83f36c12" (UID: "10140326-561b-48b8-8746-576a83f36c12"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.155030 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10140326-561b-48b8-8746-576a83f36c12-inventory" (OuterVolumeSpecName: "inventory") pod "10140326-561b-48b8-8746-576a83f36c12" (UID: "10140326-561b-48b8-8746-576a83f36c12"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.216970 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10140326-561b-48b8-8746-576a83f36c12-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.217005 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10140326-561b-48b8-8746-576a83f36c12-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.217018 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gczhd\" (UniqueName: \"kubernetes.io/projected/10140326-561b-48b8-8746-576a83f36c12-kube-api-access-gczhd\") on node \"crc\" DevicePath \"\"" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.407680 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" event={"ID":"10140326-561b-48b8-8746-576a83f36c12","Type":"ContainerDied","Data":"12e1701edce7dce12f3e43e078dbde5344a399774b83e1ef1a6c6a49d96b6124"} Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.408013 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12e1701edce7dce12f3e43e078dbde5344a399774b83e1ef1a6c6a49d96b6124" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.407782 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.511004 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9vf66"] Feb 04 09:14:58 crc kubenswrapper[4644]: E0204 09:14:58.511460 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10140326-561b-48b8-8746-576a83f36c12" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.511483 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="10140326-561b-48b8-8746-576a83f36c12" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.511724 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="10140326-561b-48b8-8746-576a83f36c12" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.513219 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.515229 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.516152 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.518117 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.518147 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.521676 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/201d72e8-4479-464a-949d-53be692f0f9e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9vf66\" (UID: \"201d72e8-4479-464a-949d-53be692f0f9e\") " pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.521762 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/201d72e8-4479-464a-949d-53be692f0f9e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9vf66\" (UID: \"201d72e8-4479-464a-949d-53be692f0f9e\") " pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.521829 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks2vw\" (UniqueName: \"kubernetes.io/projected/201d72e8-4479-464a-949d-53be692f0f9e-kube-api-access-ks2vw\") pod \"ssh-known-hosts-edpm-deployment-9vf66\" (UID: \"201d72e8-4479-464a-949d-53be692f0f9e\") " pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.528934 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9vf66"] Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.623500 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/201d72e8-4479-464a-949d-53be692f0f9e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9vf66\" (UID: \"201d72e8-4479-464a-949d-53be692f0f9e\") " pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.624829 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/201d72e8-4479-464a-949d-53be692f0f9e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9vf66\" (UID: \"201d72e8-4479-464a-949d-53be692f0f9e\") " pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.624915 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks2vw\" (UniqueName: \"kubernetes.io/projected/201d72e8-4479-464a-949d-53be692f0f9e-kube-api-access-ks2vw\") pod \"ssh-known-hosts-edpm-deployment-9vf66\" (UID: \"201d72e8-4479-464a-949d-53be692f0f9e\") " pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" Feb 04 09:14:58 crc 
kubenswrapper[4644]: I0204 09:14:58.647797 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/201d72e8-4479-464a-949d-53be692f0f9e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9vf66\" (UID: \"201d72e8-4479-464a-949d-53be692f0f9e\") " pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.652074 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks2vw\" (UniqueName: \"kubernetes.io/projected/201d72e8-4479-464a-949d-53be692f0f9e-kube-api-access-ks2vw\") pod \"ssh-known-hosts-edpm-deployment-9vf66\" (UID: \"201d72e8-4479-464a-949d-53be692f0f9e\") " pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.653956 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/201d72e8-4479-464a-949d-53be692f0f9e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9vf66\" (UID: \"201d72e8-4479-464a-949d-53be692f0f9e\") " pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" Feb 04 09:14:58 crc kubenswrapper[4644]: I0204 09:14:58.831479 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" Feb 04 09:14:59 crc kubenswrapper[4644]: I0204 09:14:59.215237 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9vf66"] Feb 04 09:14:59 crc kubenswrapper[4644]: I0204 09:14:59.419287 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" event={"ID":"201d72e8-4479-464a-949d-53be692f0f9e","Type":"ContainerStarted","Data":"ebbe57e3f96704e624cae122984ec5e7cabefd4ff9a95c555ba9701224c6abd5"} Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.140725 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q"] Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.142261 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.145254 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.146823 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.165685 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q"] Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.253909 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-config-volume\") pod \"collect-profiles-29503275-74j5q\" (UID: \"7a6e11cc-d0fb-4530-a876-ac4ce91abe97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.254017 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-secret-volume\") pod \"collect-profiles-29503275-74j5q\" (UID: \"7a6e11cc-d0fb-4530-a876-ac4ce91abe97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.254103 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ztk6\" (UniqueName: \"kubernetes.io/projected/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-kube-api-access-9ztk6\") pod \"collect-profiles-29503275-74j5q\" (UID: \"7a6e11cc-d0fb-4530-a876-ac4ce91abe97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.356236 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-secret-volume\") pod \"collect-profiles-29503275-74j5q\" (UID: \"7a6e11cc-d0fb-4530-a876-ac4ce91abe97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.356377 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ztk6\" (UniqueName: \"kubernetes.io/projected/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-kube-api-access-9ztk6\") pod \"collect-profiles-29503275-74j5q\" (UID: \"7a6e11cc-d0fb-4530-a876-ac4ce91abe97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.356476 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-config-volume\") pod \"collect-profiles-29503275-74j5q\" (UID: \"7a6e11cc-d0fb-4530-a876-ac4ce91abe97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.357594 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-config-volume\") pod 
\"collect-profiles-29503275-74j5q\" (UID: \"7a6e11cc-d0fb-4530-a876-ac4ce91abe97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.364912 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-secret-volume\") pod \"collect-profiles-29503275-74j5q\" (UID: \"7a6e11cc-d0fb-4530-a876-ac4ce91abe97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.373801 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ztk6\" (UniqueName: \"kubernetes.io/projected/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-kube-api-access-9ztk6\") pod \"collect-profiles-29503275-74j5q\" (UID: \"7a6e11cc-d0fb-4530-a876-ac4ce91abe97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.442089 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" event={"ID":"201d72e8-4479-464a-949d-53be692f0f9e","Type":"ContainerStarted","Data":"60a61984facacf94ad08ea114639da14e4aae92f8d235f74d11fc7cf17fd04bc"} Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.462780 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.468305 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" podStartSLOduration=1.7575887730000002 podStartE2EDuration="2.468284407s" podCreationTimestamp="2026-02-04 09:14:58 +0000 UTC" firstStartedPulling="2026-02-04 09:14:59.218820013 +0000 UTC m=+2009.258877768" lastFinishedPulling="2026-02-04 09:14:59.929515647 +0000 UTC m=+2009.969573402" observedRunningTime="2026-02-04 09:15:00.458638506 +0000 UTC m=+2010.498696261" watchObservedRunningTime="2026-02-04 09:15:00.468284407 +0000 UTC m=+2010.508342162" Feb 04 09:15:00 crc kubenswrapper[4644]: W0204 09:15:00.902368 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a6e11cc_d0fb_4530_a876_ac4ce91abe97.slice/crio-558f9ca7757f3223449d41730c244c9f516ad40a4e4d84d10d40cbd347ed414b WatchSource:0}: Error finding container 558f9ca7757f3223449d41730c244c9f516ad40a4e4d84d10d40cbd347ed414b: Status 404 returned error can't find the container with id 558f9ca7757f3223449d41730c244c9f516ad40a4e4d84d10d40cbd347ed414b Feb 04 09:15:00 crc kubenswrapper[4644]: I0204 09:15:00.918127 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q"] Feb 04 09:15:01 crc kubenswrapper[4644]: I0204 09:15:01.451294 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" event={"ID":"7a6e11cc-d0fb-4530-a876-ac4ce91abe97","Type":"ContainerStarted","Data":"0ad5135fa6149d5f538f7a5743edd09defd33af817013785f7a9f2c12fe55b7c"} Feb 04 09:15:01 crc kubenswrapper[4644]: I0204 09:15:01.451651 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" 
event={"ID":"7a6e11cc-d0fb-4530-a876-ac4ce91abe97","Type":"ContainerStarted","Data":"558f9ca7757f3223449d41730c244c9f516ad40a4e4d84d10d40cbd347ed414b"} Feb 04 09:15:01 crc kubenswrapper[4644]: I0204 09:15:01.474071 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" podStartSLOduration=1.474047905 podStartE2EDuration="1.474047905s" podCreationTimestamp="2026-02-04 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:15:01.468190347 +0000 UTC m=+2011.508248122" watchObservedRunningTime="2026-02-04 09:15:01.474047905 +0000 UTC m=+2011.514105660" Feb 04 09:15:02 crc kubenswrapper[4644]: I0204 09:15:02.481158 4644 generic.go:334] "Generic (PLEG): container finished" podID="7a6e11cc-d0fb-4530-a876-ac4ce91abe97" containerID="0ad5135fa6149d5f538f7a5743edd09defd33af817013785f7a9f2c12fe55b7c" exitCode=0 Feb 04 09:15:02 crc kubenswrapper[4644]: I0204 09:15:02.481508 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" event={"ID":"7a6e11cc-d0fb-4530-a876-ac4ce91abe97","Type":"ContainerDied","Data":"0ad5135fa6149d5f538f7a5743edd09defd33af817013785f7a9f2c12fe55b7c"} Feb 04 09:15:03 crc kubenswrapper[4644]: I0204 09:15:03.821914 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" Feb 04 09:15:03 crc kubenswrapper[4644]: I0204 09:15:03.995201 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ztk6\" (UniqueName: \"kubernetes.io/projected/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-kube-api-access-9ztk6\") pod \"7a6e11cc-d0fb-4530-a876-ac4ce91abe97\" (UID: \"7a6e11cc-d0fb-4530-a876-ac4ce91abe97\") " Feb 04 09:15:03 crc kubenswrapper[4644]: I0204 09:15:03.995257 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-secret-volume\") pod \"7a6e11cc-d0fb-4530-a876-ac4ce91abe97\" (UID: \"7a6e11cc-d0fb-4530-a876-ac4ce91abe97\") " Feb 04 09:15:03 crc kubenswrapper[4644]: I0204 09:15:03.995423 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-config-volume\") pod \"7a6e11cc-d0fb-4530-a876-ac4ce91abe97\" (UID: \"7a6e11cc-d0fb-4530-a876-ac4ce91abe97\") " Feb 04 09:15:03 crc kubenswrapper[4644]: I0204 09:15:03.996238 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-config-volume" (OuterVolumeSpecName: "config-volume") pod "7a6e11cc-d0fb-4530-a876-ac4ce91abe97" (UID: "7a6e11cc-d0fb-4530-a876-ac4ce91abe97"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:15:04 crc kubenswrapper[4644]: I0204 09:15:04.000752 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-kube-api-access-9ztk6" (OuterVolumeSpecName: "kube-api-access-9ztk6") pod "7a6e11cc-d0fb-4530-a876-ac4ce91abe97" (UID: "7a6e11cc-d0fb-4530-a876-ac4ce91abe97"). InnerVolumeSpecName "kube-api-access-9ztk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:15:04 crc kubenswrapper[4644]: I0204 09:15:04.006773 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7a6e11cc-d0fb-4530-a876-ac4ce91abe97" (UID: "7a6e11cc-d0fb-4530-a876-ac4ce91abe97"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:15:04 crc kubenswrapper[4644]: I0204 09:15:04.097812 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ztk6\" (UniqueName: \"kubernetes.io/projected/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-kube-api-access-9ztk6\") on node \"crc\" DevicePath \"\"" Feb 04 09:15:04 crc kubenswrapper[4644]: I0204 09:15:04.097865 4644 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 04 09:15:04 crc kubenswrapper[4644]: I0204 09:15:04.097879 4644 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a6e11cc-d0fb-4530-a876-ac4ce91abe97-config-volume\") on node \"crc\" DevicePath \"\"" Feb 04 09:15:04 crc kubenswrapper[4644]: I0204 09:15:04.501441 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" event={"ID":"7a6e11cc-d0fb-4530-a876-ac4ce91abe97","Type":"ContainerDied","Data":"558f9ca7757f3223449d41730c244c9f516ad40a4e4d84d10d40cbd347ed414b"} Feb 04 09:15:04 crc kubenswrapper[4644]: I0204 09:15:04.501949 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="558f9ca7757f3223449d41730c244c9f516ad40a4e4d84d10d40cbd347ed414b" Feb 04 09:15:04 crc kubenswrapper[4644]: I0204 09:15:04.501623 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q" Feb 04 09:15:04 crc kubenswrapper[4644]: I0204 09:15:04.566127 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf"] Feb 04 09:15:04 crc kubenswrapper[4644]: I0204 09:15:04.574445 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503230-44vlf"] Feb 04 09:15:04 crc kubenswrapper[4644]: I0204 09:15:04.668737 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa2f8af2-85ef-4b5d-a95a-194c6a05a501" path="/var/lib/kubelet/pods/aa2f8af2-85ef-4b5d-a95a-194c6a05a501/volumes" Feb 04 09:15:07 crc kubenswrapper[4644]: I0204 09:15:07.529364 4644 generic.go:334] "Generic (PLEG): container finished" podID="201d72e8-4479-464a-949d-53be692f0f9e" containerID="60a61984facacf94ad08ea114639da14e4aae92f8d235f74d11fc7cf17fd04bc" exitCode=0 Feb 04 09:15:07 crc kubenswrapper[4644]: I0204 09:15:07.529442 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" event={"ID":"201d72e8-4479-464a-949d-53be692f0f9e","Type":"ContainerDied","Data":"60a61984facacf94ad08ea114639da14e4aae92f8d235f74d11fc7cf17fd04bc"} Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.248595 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.396660 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/201d72e8-4479-464a-949d-53be692f0f9e-ssh-key-openstack-edpm-ipam\") pod \"201d72e8-4479-464a-949d-53be692f0f9e\" (UID: \"201d72e8-4479-464a-949d-53be692f0f9e\") " Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.397018 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/201d72e8-4479-464a-949d-53be692f0f9e-inventory-0\") pod \"201d72e8-4479-464a-949d-53be692f0f9e\" (UID: \"201d72e8-4479-464a-949d-53be692f0f9e\") " Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.397187 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks2vw\" (UniqueName: \"kubernetes.io/projected/201d72e8-4479-464a-949d-53be692f0f9e-kube-api-access-ks2vw\") pod \"201d72e8-4479-464a-949d-53be692f0f9e\" (UID: \"201d72e8-4479-464a-949d-53be692f0f9e\") " Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.403685 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/201d72e8-4479-464a-949d-53be692f0f9e-kube-api-access-ks2vw" (OuterVolumeSpecName: "kube-api-access-ks2vw") pod "201d72e8-4479-464a-949d-53be692f0f9e" (UID: "201d72e8-4479-464a-949d-53be692f0f9e"). InnerVolumeSpecName "kube-api-access-ks2vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.424280 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201d72e8-4479-464a-949d-53be692f0f9e-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "201d72e8-4479-464a-949d-53be692f0f9e" (UID: "201d72e8-4479-464a-949d-53be692f0f9e"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.432835 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201d72e8-4479-464a-949d-53be692f0f9e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "201d72e8-4479-464a-949d-53be692f0f9e" (UID: "201d72e8-4479-464a-949d-53be692f0f9e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.499483 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/201d72e8-4479-464a-949d-53be692f0f9e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.499790 4644 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/201d72e8-4479-464a-949d-53be692f0f9e-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.499803 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks2vw\" (UniqueName: \"kubernetes.io/projected/201d72e8-4479-464a-949d-53be692f0f9e-kube-api-access-ks2vw\") on node \"crc\" DevicePath \"\"" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.554137 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" event={"ID":"201d72e8-4479-464a-949d-53be692f0f9e","Type":"ContainerDied","Data":"ebbe57e3f96704e624cae122984ec5e7cabefd4ff9a95c555ba9701224c6abd5"} Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.554183 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebbe57e3f96704e624cae122984ec5e7cabefd4ff9a95c555ba9701224c6abd5" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.554214 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9vf66" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.642885 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp"] Feb 04 09:15:09 crc kubenswrapper[4644]: E0204 09:15:09.643319 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201d72e8-4479-464a-949d-53be692f0f9e" containerName="ssh-known-hosts-edpm-deployment" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.643356 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="201d72e8-4479-464a-949d-53be692f0f9e" containerName="ssh-known-hosts-edpm-deployment" Feb 04 09:15:09 crc kubenswrapper[4644]: E0204 09:15:09.643402 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a6e11cc-d0fb-4530-a876-ac4ce91abe97" containerName="collect-profiles" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.643411 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a6e11cc-d0fb-4530-a876-ac4ce91abe97" containerName="collect-profiles" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.643644 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="201d72e8-4479-464a-949d-53be692f0f9e" containerName="ssh-known-hosts-edpm-deployment" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.643669 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a6e11cc-d0fb-4530-a876-ac4ce91abe97" containerName="collect-profiles" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.644420 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.650970 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.651463 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.651739 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.652845 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.653088 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp"] Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.805870 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng8k7\" (UniqueName: \"kubernetes.io/projected/bf71221b-6b1b-4245-b080-346ef3c46902-kube-api-access-ng8k7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5gqkp\" (UID: \"bf71221b-6b1b-4245-b080-346ef3c46902\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.806130 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf71221b-6b1b-4245-b080-346ef3c46902-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5gqkp\" (UID: \"bf71221b-6b1b-4245-b080-346ef3c46902\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.806176 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf71221b-6b1b-4245-b080-346ef3c46902-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5gqkp\" (UID: \"bf71221b-6b1b-4245-b080-346ef3c46902\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.908664 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng8k7\" (UniqueName: \"kubernetes.io/projected/bf71221b-6b1b-4245-b080-346ef3c46902-kube-api-access-ng8k7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5gqkp\" (UID: \"bf71221b-6b1b-4245-b080-346ef3c46902\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.908767 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf71221b-6b1b-4245-b080-346ef3c46902-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5gqkp\" (UID: \"bf71221b-6b1b-4245-b080-346ef3c46902\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.908798 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf71221b-6b1b-4245-b080-346ef3c46902-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-5gqkp\" (UID: \"bf71221b-6b1b-4245-b080-346ef3c46902\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.915636 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf71221b-6b1b-4245-b080-346ef3c46902-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5gqkp\" (UID: \"bf71221b-6b1b-4245-b080-346ef3c46902\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.923003 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf71221b-6b1b-4245-b080-346ef3c46902-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5gqkp\" (UID: \"bf71221b-6b1b-4245-b080-346ef3c46902\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.932982 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng8k7\" (UniqueName: \"kubernetes.io/projected/bf71221b-6b1b-4245-b080-346ef3c46902-kube-api-access-ng8k7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5gqkp\" (UID: \"bf71221b-6b1b-4245-b080-346ef3c46902\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" Feb 04 09:15:09 crc kubenswrapper[4644]: I0204 09:15:09.964237 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" Feb 04 09:15:10 crc kubenswrapper[4644]: I0204 09:15:10.501255 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp"] Feb 04 09:15:10 crc kubenswrapper[4644]: I0204 09:15:10.563293 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" event={"ID":"bf71221b-6b1b-4245-b080-346ef3c46902","Type":"ContainerStarted","Data":"29c952c42ca33155637ac3be90a2016c482dfb48c9bfba00f5a4c6aa88509668"} Feb 04 09:15:11 crc kubenswrapper[4644]: I0204 09:15:11.573472 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" event={"ID":"bf71221b-6b1b-4245-b080-346ef3c46902","Type":"ContainerStarted","Data":"0137ae05960540ae4d9acc7b4a58a73ec0a0098d482a59077ed63f39e9bb640f"} Feb 04 09:15:11 crc kubenswrapper[4644]: I0204 09:15:11.596014 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" podStartSLOduration=2.139893267 podStartE2EDuration="2.595994542s" podCreationTimestamp="2026-02-04 09:15:09 +0000 UTC" firstStartedPulling="2026-02-04 09:15:10.507682743 +0000 UTC m=+2020.547740488" lastFinishedPulling="2026-02-04 09:15:10.963784008 +0000 UTC m=+2021.003841763" observedRunningTime="2026-02-04 09:15:11.591138821 +0000 UTC m=+2021.631196576" watchObservedRunningTime="2026-02-04 09:15:11.595994542 +0000 UTC m=+2021.636052297" Feb 04 09:15:19 crc kubenswrapper[4644]: I0204 09:15:19.641108 4644 generic.go:334] "Generic (PLEG): container finished" podID="bf71221b-6b1b-4245-b080-346ef3c46902" containerID="0137ae05960540ae4d9acc7b4a58a73ec0a0098d482a59077ed63f39e9bb640f" exitCode=0 Feb 04 09:15:19 crc kubenswrapper[4644]: I0204 09:15:19.641201 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" event={"ID":"bf71221b-6b1b-4245-b080-346ef3c46902","Type":"ContainerDied","Data":"0137ae05960540ae4d9acc7b4a58a73ec0a0098d482a59077ed63f39e9bb640f"} Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.057144 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.224911 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf71221b-6b1b-4245-b080-346ef3c46902-ssh-key-openstack-edpm-ipam\") pod \"bf71221b-6b1b-4245-b080-346ef3c46902\" (UID: \"bf71221b-6b1b-4245-b080-346ef3c46902\") " Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.225149 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf71221b-6b1b-4245-b080-346ef3c46902-inventory\") pod \"bf71221b-6b1b-4245-b080-346ef3c46902\" (UID: \"bf71221b-6b1b-4245-b080-346ef3c46902\") " Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.225515 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng8k7\" (UniqueName: \"kubernetes.io/projected/bf71221b-6b1b-4245-b080-346ef3c46902-kube-api-access-ng8k7\") pod \"bf71221b-6b1b-4245-b080-346ef3c46902\" (UID: \"bf71221b-6b1b-4245-b080-346ef3c46902\") " Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.242274 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf71221b-6b1b-4245-b080-346ef3c46902-kube-api-access-ng8k7" (OuterVolumeSpecName: "kube-api-access-ng8k7") pod "bf71221b-6b1b-4245-b080-346ef3c46902" (UID: "bf71221b-6b1b-4245-b080-346ef3c46902"). InnerVolumeSpecName "kube-api-access-ng8k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.260976 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf71221b-6b1b-4245-b080-346ef3c46902-inventory" (OuterVolumeSpecName: "inventory") pod "bf71221b-6b1b-4245-b080-346ef3c46902" (UID: "bf71221b-6b1b-4245-b080-346ef3c46902"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.266361 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf71221b-6b1b-4245-b080-346ef3c46902-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bf71221b-6b1b-4245-b080-346ef3c46902" (UID: "bf71221b-6b1b-4245-b080-346ef3c46902"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.327445 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng8k7\" (UniqueName: \"kubernetes.io/projected/bf71221b-6b1b-4245-b080-346ef3c46902-kube-api-access-ng8k7\") on node \"crc\" DevicePath \"\"" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.327735 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf71221b-6b1b-4245-b080-346ef3c46902-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.327747 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf71221b-6b1b-4245-b080-346ef3c46902-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.661991 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" event={"ID":"bf71221b-6b1b-4245-b080-346ef3c46902","Type":"ContainerDied","Data":"29c952c42ca33155637ac3be90a2016c482dfb48c9bfba00f5a4c6aa88509668"} Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.662038 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29c952c42ca33155637ac3be90a2016c482dfb48c9bfba00f5a4c6aa88509668" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.662089 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5gqkp" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.752453 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6"] Feb 04 09:15:21 crc kubenswrapper[4644]: E0204 09:15:21.752946 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf71221b-6b1b-4245-b080-346ef3c46902" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.752971 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf71221b-6b1b-4245-b080-346ef3c46902" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.753195 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf71221b-6b1b-4245-b080-346ef3c46902" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.754017 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.756083 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.756300 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.756473 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.767879 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.775516 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6"] Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.839185 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4d8e999-5063-4f94-a049-6566ecee94fb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6\" (UID: \"c4d8e999-5063-4f94-a049-6566ecee94fb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.839233 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtc2d\" (UniqueName: \"kubernetes.io/projected/c4d8e999-5063-4f94-a049-6566ecee94fb-kube-api-access-dtc2d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6\" (UID: \"c4d8e999-5063-4f94-a049-6566ecee94fb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.839358 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4d8e999-5063-4f94-a049-6566ecee94fb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6\" (UID: \"c4d8e999-5063-4f94-a049-6566ecee94fb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.941127 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4d8e999-5063-4f94-a049-6566ecee94fb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6\" (UID: \"c4d8e999-5063-4f94-a049-6566ecee94fb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.941189 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtc2d\" (UniqueName: \"kubernetes.io/projected/c4d8e999-5063-4f94-a049-6566ecee94fb-kube-api-access-dtc2d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6\" (UID: \"c4d8e999-5063-4f94-a049-6566ecee94fb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.941252 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4d8e999-5063-4f94-a049-6566ecee94fb-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6\" (UID: \"c4d8e999-5063-4f94-a049-6566ecee94fb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.947349 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4d8e999-5063-4f94-a049-6566ecee94fb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6\" (UID: \"c4d8e999-5063-4f94-a049-6566ecee94fb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.948863 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4d8e999-5063-4f94-a049-6566ecee94fb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6\" (UID: \"c4d8e999-5063-4f94-a049-6566ecee94fb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" Feb 04 09:15:21 crc kubenswrapper[4644]: I0204 09:15:21.961393 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtc2d\" (UniqueName: \"kubernetes.io/projected/c4d8e999-5063-4f94-a049-6566ecee94fb-kube-api-access-dtc2d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6\" (UID: \"c4d8e999-5063-4f94-a049-6566ecee94fb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" Feb 04 09:15:22 crc kubenswrapper[4644]: I0204 09:15:22.071660 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" Feb 04 09:15:22 crc kubenswrapper[4644]: I0204 09:15:22.609472 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6"] Feb 04 09:15:22 crc kubenswrapper[4644]: I0204 09:15:22.675468 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" event={"ID":"c4d8e999-5063-4f94-a049-6566ecee94fb","Type":"ContainerStarted","Data":"f2b426f2c18e768a5e3c4a5ba8f2d5a25edaa753162b4b1c283f3506938cbb19"} Feb 04 09:15:23 crc kubenswrapper[4644]: I0204 09:15:23.682221 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" event={"ID":"c4d8e999-5063-4f94-a049-6566ecee94fb","Type":"ContainerStarted","Data":"9a543e5410ddcbdf6cb6fb8ea7815263ebee2e1cc9200fd3b56f997282efa20d"} Feb 04 09:15:23 crc kubenswrapper[4644]: I0204 09:15:23.706318 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" podStartSLOduration=2.256302192 podStartE2EDuration="2.706298122s" podCreationTimestamp="2026-02-04 09:15:21 +0000 UTC" firstStartedPulling="2026-02-04 09:15:22.620725887 +0000 UTC m=+2032.660783642" lastFinishedPulling="2026-02-04 09:15:23.070721817 +0000 UTC m=+2033.110779572" observedRunningTime="2026-02-04 09:15:23.705768527 +0000 UTC m=+2033.745826292" watchObservedRunningTime="2026-02-04 09:15:23.706298122 +0000 UTC m=+2033.746355877" Feb 04 09:15:33 crc kubenswrapper[4644]: I0204 09:15:33.777098 4644 generic.go:334] "Generic (PLEG): container finished" podID="c4d8e999-5063-4f94-a049-6566ecee94fb" containerID="9a543e5410ddcbdf6cb6fb8ea7815263ebee2e1cc9200fd3b56f997282efa20d" exitCode=0 Feb 04 09:15:33 crc kubenswrapper[4644]: I0204 09:15:33.777187 4644 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" event={"ID":"c4d8e999-5063-4f94-a049-6566ecee94fb","Type":"ContainerDied","Data":"9a543e5410ddcbdf6cb6fb8ea7815263ebee2e1cc9200fd3b56f997282efa20d"} Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.266247 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.418891 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4d8e999-5063-4f94-a049-6566ecee94fb-ssh-key-openstack-edpm-ipam\") pod \"c4d8e999-5063-4f94-a049-6566ecee94fb\" (UID: \"c4d8e999-5063-4f94-a049-6566ecee94fb\") " Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.419049 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtc2d\" (UniqueName: \"kubernetes.io/projected/c4d8e999-5063-4f94-a049-6566ecee94fb-kube-api-access-dtc2d\") pod \"c4d8e999-5063-4f94-a049-6566ecee94fb\" (UID: \"c4d8e999-5063-4f94-a049-6566ecee94fb\") " Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.419140 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4d8e999-5063-4f94-a049-6566ecee94fb-inventory\") pod \"c4d8e999-5063-4f94-a049-6566ecee94fb\" (UID: \"c4d8e999-5063-4f94-a049-6566ecee94fb\") " Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.425163 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d8e999-5063-4f94-a049-6566ecee94fb-kube-api-access-dtc2d" (OuterVolumeSpecName: "kube-api-access-dtc2d") pod "c4d8e999-5063-4f94-a049-6566ecee94fb" (UID: "c4d8e999-5063-4f94-a049-6566ecee94fb"). InnerVolumeSpecName "kube-api-access-dtc2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.450404 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d8e999-5063-4f94-a049-6566ecee94fb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c4d8e999-5063-4f94-a049-6566ecee94fb" (UID: "c4d8e999-5063-4f94-a049-6566ecee94fb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.453574 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d8e999-5063-4f94-a049-6566ecee94fb-inventory" (OuterVolumeSpecName: "inventory") pod "c4d8e999-5063-4f94-a049-6566ecee94fb" (UID: "c4d8e999-5063-4f94-a049-6566ecee94fb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.521213 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtc2d\" (UniqueName: \"kubernetes.io/projected/c4d8e999-5063-4f94-a049-6566ecee94fb-kube-api-access-dtc2d\") on node \"crc\" DevicePath \"\"" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.521269 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4d8e999-5063-4f94-a049-6566ecee94fb-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.521283 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4d8e999-5063-4f94-a049-6566ecee94fb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.554978 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.555032 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.796114 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" event={"ID":"c4d8e999-5063-4f94-a049-6566ecee94fb","Type":"ContainerDied","Data":"f2b426f2c18e768a5e3c4a5ba8f2d5a25edaa753162b4b1c283f3506938cbb19"} Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.796155 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2b426f2c18e768a5e3c4a5ba8f2d5a25edaa753162b4b1c283f3506938cbb19" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.796216 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.909625 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr"] Feb 04 09:15:35 crc kubenswrapper[4644]: E0204 09:15:35.910482 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d8e999-5063-4f94-a049-6566ecee94fb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.910585 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d8e999-5063-4f94-a049-6566ecee94fb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.910933 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d8e999-5063-4f94-a049-6566ecee94fb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.912023 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.916652 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.917689 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.919430 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.919637 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.918318 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.918395 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.919373 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.920141 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 04 09:15:35 crc kubenswrapper[4644]: I0204 09:15:35.942614 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr"] Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.028812 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.028867 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.028890 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.028910 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: 
\"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.028955 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.028981 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.029000 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.029017 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.029040 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.029065 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.029091 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.029113 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.029133 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skvqn\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-kube-api-access-skvqn\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.029202 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.131116 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.131532 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.131699 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.131915 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.132103 4644 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.132211 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.132311 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.132421 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.132551 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.132684 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.132819 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.132952 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.133266 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skvqn\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-kube-api-access-skvqn\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.133624 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.137205 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.137758 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.139369 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.141167 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.142452 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.144109 4644 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.144701 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.145715 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.147797 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.150667 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.152577 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.153642 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skvqn\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-kube-api-access-skvqn\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.154280 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.154562 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s75nr\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.251172 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.771083 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr"] Feb 04 09:15:36 crc kubenswrapper[4644]: I0204 09:15:36.808021 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" event={"ID":"11062f5d-3dd1-4087-9ea2-1b32fee5526c","Type":"ContainerStarted","Data":"e743a484a1baf9504f34b71192a730b9fba1c73ee9ec03321e6d57e1f4e8e827"} Feb 04 09:15:37 crc kubenswrapper[4644]: I0204 09:15:37.820060 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" event={"ID":"11062f5d-3dd1-4087-9ea2-1b32fee5526c","Type":"ContainerStarted","Data":"bf84df0defeab444fd362341e44595ffc2d13faae806e06e140e78299734a6a7"} Feb 04 09:15:37 crc kubenswrapper[4644]: I0204 09:15:37.850148 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" podStartSLOduration=2.104167094 podStartE2EDuration="2.850124292s" podCreationTimestamp="2026-02-04 09:15:35 +0000 UTC" firstStartedPulling="2026-02-04 09:15:36.779398938 +0000 UTC m=+2046.819456693" lastFinishedPulling="2026-02-04 09:15:37.525356136 +0000 UTC m=+2047.565413891" observedRunningTime="2026-02-04 09:15:37.845303531 +0000 UTC m=+2047.885361286" watchObservedRunningTime="2026-02-04 09:15:37.850124292 +0000 UTC m=+2047.890182037" Feb 04 09:16:01 crc kubenswrapper[4644]: I0204 09:16:01.981294 4644 scope.go:117] "RemoveContainer" containerID="2f331a753711e61947e987b07ed7ded0d7923fca166fea98bcfcd90f713f104b" Feb 04 09:16:05 crc kubenswrapper[4644]: I0204 09:16:05.885155 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:16:05 crc kubenswrapper[4644]: I0204 09:16:05.885891 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:16:16 crc kubenswrapper[4644]: I0204 09:16:16.011692 4644 generic.go:334] "Generic (PLEG): container finished" podID="11062f5d-3dd1-4087-9ea2-1b32fee5526c" containerID="bf84df0defeab444fd362341e44595ffc2d13faae806e06e140e78299734a6a7" exitCode=0 Feb 
Feb 04 09:16:16 crc kubenswrapper[4644]: I0204 09:16:16.011801 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" event={"ID":"11062f5d-3dd1-4087-9ea2-1b32fee5526c","Type":"ContainerDied","Data":"bf84df0defeab444fd362341e44595ffc2d13faae806e06e140e78299734a6a7"}
Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.561192 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr"
Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.645661 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-telemetry-combined-ca-bundle\") pod \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") "
Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.646158 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") "
Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.646371 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") "
Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.646645 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-bootstrap-combined-ca-bundle\") pod \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") "
Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.646782 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-libvirt-combined-ca-bundle\") pod \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") "
Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.646978 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skvqn\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-kube-api-access-skvqn\") pod \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") "
Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.647107 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-ssh-key-openstack-edpm-ipam\") pod \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") "
Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.647243 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-inventory\") pod \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\"
(UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.647382 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-neutron-metadata-combined-ca-bundle\") pod \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.647508 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.647611 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-repo-setup-combined-ca-bundle\") pod \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.647710 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-nova-combined-ca-bundle\") pod \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.647800 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-ovn-combined-ca-bundle\") pod \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.647900 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\" (UID: \"11062f5d-3dd1-4087-9ea2-1b32fee5526c\") " Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.652170 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "11062f5d-3dd1-4087-9ea2-1b32fee5526c" (UID: "11062f5d-3dd1-4087-9ea2-1b32fee5526c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.652466 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "11062f5d-3dd1-4087-9ea2-1b32fee5526c" (UID: "11062f5d-3dd1-4087-9ea2-1b32fee5526c"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.652979 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "11062f5d-3dd1-4087-9ea2-1b32fee5526c" (UID: "11062f5d-3dd1-4087-9ea2-1b32fee5526c"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.654155 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "11062f5d-3dd1-4087-9ea2-1b32fee5526c" (UID: "11062f5d-3dd1-4087-9ea2-1b32fee5526c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.654238 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "11062f5d-3dd1-4087-9ea2-1b32fee5526c" (UID: "11062f5d-3dd1-4087-9ea2-1b32fee5526c"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.654438 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "11062f5d-3dd1-4087-9ea2-1b32fee5526c" (UID: "11062f5d-3dd1-4087-9ea2-1b32fee5526c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.656684 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "11062f5d-3dd1-4087-9ea2-1b32fee5526c" (UID: "11062f5d-3dd1-4087-9ea2-1b32fee5526c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.656887 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "11062f5d-3dd1-4087-9ea2-1b32fee5526c" (UID: "11062f5d-3dd1-4087-9ea2-1b32fee5526c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.657173 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "11062f5d-3dd1-4087-9ea2-1b32fee5526c" (UID: "11062f5d-3dd1-4087-9ea2-1b32fee5526c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.663801 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "11062f5d-3dd1-4087-9ea2-1b32fee5526c" (UID: "11062f5d-3dd1-4087-9ea2-1b32fee5526c"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.664404 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "11062f5d-3dd1-4087-9ea2-1b32fee5526c" (UID: "11062f5d-3dd1-4087-9ea2-1b32fee5526c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.665607 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-kube-api-access-skvqn" (OuterVolumeSpecName: "kube-api-access-skvqn") pod "11062f5d-3dd1-4087-9ea2-1b32fee5526c" (UID: "11062f5d-3dd1-4087-9ea2-1b32fee5526c"). InnerVolumeSpecName "kube-api-access-skvqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.685539 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "11062f5d-3dd1-4087-9ea2-1b32fee5526c" (UID: "11062f5d-3dd1-4087-9ea2-1b32fee5526c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.690355 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-inventory" (OuterVolumeSpecName: "inventory") pod "11062f5d-3dd1-4087-9ea2-1b32fee5526c" (UID: "11062f5d-3dd1-4087-9ea2-1b32fee5526c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.750355 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skvqn\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-kube-api-access-skvqn\") on node \"crc\" DevicePath \"\"" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.750400 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.750416 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.750429 4644 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.750443 4644 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.750457 4644 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.750468 4644 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.750480 4644 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.750491 4644 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.750504 4644 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.750516 4644 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.750528 4644 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/11062f5d-3dd1-4087-9ea2-1b32fee5526c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.750540 4644 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:16:17 crc kubenswrapper[4644]: I0204 09:16:17.750553 4644 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11062f5d-3dd1-4087-9ea2-1b32fee5526c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.028627 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" event={"ID":"11062f5d-3dd1-4087-9ea2-1b32fee5526c","Type":"ContainerDied","Data":"e743a484a1baf9504f34b71192a730b9fba1c73ee9ec03321e6d57e1f4e8e827"} Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.028683 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e743a484a1baf9504f34b71192a730b9fba1c73ee9ec03321e6d57e1f4e8e827" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.028795 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s75nr" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.162578 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5"] Feb 04 09:16:18 crc kubenswrapper[4644]: E0204 09:16:18.162945 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11062f5d-3dd1-4087-9ea2-1b32fee5526c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.162963 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="11062f5d-3dd1-4087-9ea2-1b32fee5526c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.163156 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="11062f5d-3dd1-4087-9ea2-1b32fee5526c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.163777 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.168869 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.169403 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.169715 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.169883 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.171042 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.185644 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5"] Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.261064 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4lkt5\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.261134 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/409ea25f-f243-4e2e-811a-2e887aad6ab8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4lkt5\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.261172 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4lkt5\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.261235 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t24tf\" (UniqueName: \"kubernetes.io/projected/409ea25f-f243-4e2e-811a-2e887aad6ab8-kube-api-access-t24tf\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4lkt5\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.261258 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4lkt5\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.363555 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4lkt5\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.363630 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/409ea25f-f243-4e2e-811a-2e887aad6ab8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4lkt5\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.363689 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4lkt5\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.363767 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24tf\" (UniqueName: \"kubernetes.io/projected/409ea25f-f243-4e2e-811a-2e887aad6ab8-kube-api-access-t24tf\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4lkt5\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.363794 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4lkt5\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.365007 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/409ea25f-f243-4e2e-811a-2e887aad6ab8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4lkt5\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.371222 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4lkt5\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.379726 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4lkt5\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.382973 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4lkt5\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.396394 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t24tf\" (UniqueName: \"kubernetes.io/projected/409ea25f-f243-4e2e-811a-2e887aad6ab8-kube-api-access-t24tf\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4lkt5\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:18 crc kubenswrapper[4644]: I0204 09:16:18.512036 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:16:19 crc kubenswrapper[4644]: I0204 09:16:19.061297 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5"] Feb 04 09:16:20 crc kubenswrapper[4644]: I0204 09:16:20.054572 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" event={"ID":"409ea25f-f243-4e2e-811a-2e887aad6ab8","Type":"ContainerStarted","Data":"9a9331ea3fcffb09641b19c893364b6f90f6a82805e4b21aeee9947dcabf8c2e"} Feb 04 09:16:24 crc kubenswrapper[4644]: I0204 09:16:24.111882 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" event={"ID":"409ea25f-f243-4e2e-811a-2e887aad6ab8","Type":"ContainerStarted","Data":"459da5fef5c3a18d23f40d858f2d6d7f833b9460c580492f263f217f40fbb7db"} Feb 04 09:16:24 crc kubenswrapper[4644]: I0204 09:16:24.128733 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" podStartSLOduration=1.84208191 podStartE2EDuration="6.128717644s" podCreationTimestamp="2026-02-04 09:16:18 +0000 UTC" firstStartedPulling="2026-02-04 09:16:19.072235537 +0000 UTC m=+2089.112293292" lastFinishedPulling="2026-02-04 09:16:23.358871271 +0000 UTC m=+2093.398929026" observedRunningTime="2026-02-04 09:16:24.125560119 +0000 UTC m=+2094.165617874" watchObservedRunningTime="2026-02-04 09:16:24.128717644 +0000 UTC m=+2094.168775399" Feb 04 09:16:35 crc kubenswrapper[4644]: I0204 09:16:35.554849 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:16:35 crc kubenswrapper[4644]: I0204 09:16:35.555314 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:16:35 crc kubenswrapper[4644]: I0204 09:16:35.555386 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 09:16:35 crc kubenswrapper[4644]: I0204 09:16:35.556103 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
Feb 04 09:16:35 crc kubenswrapper[4644]: I0204 09:16:35.556103 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8f1f22f8bb04b29ec4bff87a7286a4cc5bae3e174104b35d24f389917224840"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 04 09:16:35 crc kubenswrapper[4644]: I0204 09:16:35.556146 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://a8f1f22f8bb04b29ec4bff87a7286a4cc5bae3e174104b35d24f389917224840" gracePeriod=600
Feb 04 09:16:36 crc kubenswrapper[4644]: I0204 09:16:36.231881 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="a8f1f22f8bb04b29ec4bff87a7286a4cc5bae3e174104b35d24f389917224840" exitCode=0
Feb 04 09:16:36 crc kubenswrapper[4644]: I0204 09:16:36.232004 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"a8f1f22f8bb04b29ec4bff87a7286a4cc5bae3e174104b35d24f389917224840"}
Feb 04 09:16:36 crc kubenswrapper[4644]: I0204 09:16:36.232515 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb"}
Feb 04 09:16:36 crc kubenswrapper[4644]: I0204 09:16:36.232554 4644 scope.go:117] "RemoveContainer" containerID="a15224bd5ee4afca0410e19edf5ad8bb1987def78650c79631918d1d02677982"
Feb 04 09:17:30 crc kubenswrapper[4644]: I0204 09:17:30.753242 4644 generic.go:334] "Generic (PLEG): container finished" podID="409ea25f-f243-4e2e-811a-2e887aad6ab8" containerID="459da5fef5c3a18d23f40d858f2d6d7f833b9460c580492f263f217f40fbb7db" exitCode=0
Feb 04 09:17:30 crc kubenswrapper[4644]: I0204 09:17:30.753297 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" event={"ID":"409ea25f-f243-4e2e-811a-2e887aad6ab8","Type":"ContainerDied","Data":"459da5fef5c3a18d23f40d858f2d6d7f833b9460c580492f263f217f40fbb7db"}
Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.301971 4644 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.364312 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-ovn-combined-ca-bundle\") pod \"409ea25f-f243-4e2e-811a-2e887aad6ab8\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.364392 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t24tf\" (UniqueName: \"kubernetes.io/projected/409ea25f-f243-4e2e-811a-2e887aad6ab8-kube-api-access-t24tf\") pod \"409ea25f-f243-4e2e-811a-2e887aad6ab8\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.364423 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-ssh-key-openstack-edpm-ipam\") pod \"409ea25f-f243-4e2e-811a-2e887aad6ab8\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.364645 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-inventory\") pod \"409ea25f-f243-4e2e-811a-2e887aad6ab8\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.364688 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/409ea25f-f243-4e2e-811a-2e887aad6ab8-ovncontroller-config-0\") pod \"409ea25f-f243-4e2e-811a-2e887aad6ab8\" (UID: \"409ea25f-f243-4e2e-811a-2e887aad6ab8\") " Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.372354 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "409ea25f-f243-4e2e-811a-2e887aad6ab8" (UID: "409ea25f-f243-4e2e-811a-2e887aad6ab8"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.373575 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/409ea25f-f243-4e2e-811a-2e887aad6ab8-kube-api-access-t24tf" (OuterVolumeSpecName: "kube-api-access-t24tf") pod "409ea25f-f243-4e2e-811a-2e887aad6ab8" (UID: "409ea25f-f243-4e2e-811a-2e887aad6ab8"). InnerVolumeSpecName "kube-api-access-t24tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.400630 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-inventory" (OuterVolumeSpecName: "inventory") pod "409ea25f-f243-4e2e-811a-2e887aad6ab8" (UID: "409ea25f-f243-4e2e-811a-2e887aad6ab8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.403217 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "409ea25f-f243-4e2e-811a-2e887aad6ab8" (UID: "409ea25f-f243-4e2e-811a-2e887aad6ab8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.407258 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409ea25f-f243-4e2e-811a-2e887aad6ab8-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "409ea25f-f243-4e2e-811a-2e887aad6ab8" (UID: "409ea25f-f243-4e2e-811a-2e887aad6ab8"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.466585 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.466617 4644 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/409ea25f-f243-4e2e-811a-2e887aad6ab8-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.466629 4644 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.466637 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t24tf\" (UniqueName: \"kubernetes.io/projected/409ea25f-f243-4e2e-811a-2e887aad6ab8-kube-api-access-t24tf\") on node \"crc\" DevicePath \"\"" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.466646 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/409ea25f-f243-4e2e-811a-2e887aad6ab8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.773919 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" event={"ID":"409ea25f-f243-4e2e-811a-2e887aad6ab8","Type":"ContainerDied","Data":"9a9331ea3fcffb09641b19c893364b6f90f6a82805e4b21aeee9947dcabf8c2e"} Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.773967 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a9331ea3fcffb09641b19c893364b6f90f6a82805e4b21aeee9947dcabf8c2e" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.774031 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4lkt5" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.884897 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv"] Feb 04 09:17:32 crc kubenswrapper[4644]: E0204 09:17:32.885370 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409ea25f-f243-4e2e-811a-2e887aad6ab8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.885390 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="409ea25f-f243-4e2e-811a-2e887aad6ab8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.885639 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="409ea25f-f243-4e2e-811a-2e887aad6ab8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.886459 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.892458 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.892514 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.892523 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.894900 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.896183 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.897787 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.904720 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv"] Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.974527 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvzl8\" (UniqueName: \"kubernetes.io/projected/da0998d9-9cc2-4e46-ac4f-f47ec801a998-kube-api-access-bvzl8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.974636 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.974674 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.974722 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.974872 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:32 crc kubenswrapper[4644]: I0204 09:17:32.975137 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:33 crc kubenswrapper[4644]: I0204 09:17:33.076783 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:33 crc kubenswrapper[4644]: I0204 09:17:33.077122 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:33 crc kubenswrapper[4644]: I0204 09:17:33.077291 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:33 crc kubenswrapper[4644]: I0204 09:17:33.077459 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-nova-metadata-neutron-config-0\") 
pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:33 crc kubenswrapper[4644]: I0204 09:17:33.077603 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvzl8\" (UniqueName: \"kubernetes.io/projected/da0998d9-9cc2-4e46-ac4f-f47ec801a998-kube-api-access-bvzl8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:33 crc kubenswrapper[4644]: I0204 09:17:33.077751 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:33 crc kubenswrapper[4644]: I0204 09:17:33.081843 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:33 crc kubenswrapper[4644]: I0204 09:17:33.081843 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:33 crc kubenswrapper[4644]: I0204 09:17:33.081919 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:33 crc kubenswrapper[4644]: I0204 09:17:33.083626 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:33 crc kubenswrapper[4644]: I0204 09:17:33.093895 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:33 crc kubenswrapper[4644]: I0204 09:17:33.104335 4644 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvzl8\" (UniqueName: \"kubernetes.io/projected/da0998d9-9cc2-4e46-ac4f-f47ec801a998-kube-api-access-bvzl8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:33 crc kubenswrapper[4644]: I0204 09:17:33.203565 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" Feb 04 09:17:33 crc kubenswrapper[4644]: I0204 09:17:33.714937 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv"] Feb 04 09:17:33 crc kubenswrapper[4644]: W0204 09:17:33.729766 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda0998d9_9cc2_4e46_ac4f_f47ec801a998.slice/crio-eb7b2bca9439d3bf62bd4446003f3beb5b960c0aa4ca35ff5494936c5ea17357 WatchSource:0}: Error finding container eb7b2bca9439d3bf62bd4446003f3beb5b960c0aa4ca35ff5494936c5ea17357: Status 404 returned error can't find the container with id eb7b2bca9439d3bf62bd4446003f3beb5b960c0aa4ca35ff5494936c5ea17357 Feb 04 09:17:33 crc kubenswrapper[4644]: I0204 09:17:33.785648 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" event={"ID":"da0998d9-9cc2-4e46-ac4f-f47ec801a998","Type":"ContainerStarted","Data":"eb7b2bca9439d3bf62bd4446003f3beb5b960c0aa4ca35ff5494936c5ea17357"} Feb 04 09:17:34 crc kubenswrapper[4644]: I0204 09:17:34.794675 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" event={"ID":"da0998d9-9cc2-4e46-ac4f-f47ec801a998","Type":"ContainerStarted","Data":"8b816530cdc7437f3318a1d46acc561398ceec346d687c35a8d240edbe995cbb"} Feb 04 09:17:34 crc kubenswrapper[4644]: I0204 09:17:34.818853 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" podStartSLOduration=2.289048004 podStartE2EDuration="2.818832995s" podCreationTimestamp="2026-02-04 09:17:32 +0000 UTC" firstStartedPulling="2026-02-04 09:17:33.732111488 +0000 UTC m=+2163.772169243" lastFinishedPulling="2026-02-04 09:17:34.261896479 +0000 UTC m=+2164.301954234" observedRunningTime="2026-02-04 09:17:34.809169133 +0000 UTC m=+2164.849226898" watchObservedRunningTime="2026-02-04 09:17:34.818832995 +0000 UTC m=+2164.858890750" Feb 04 09:17:35 crc kubenswrapper[4644]: I0204 09:17:35.171192 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t7k59"] Feb 04 09:17:35 crc kubenswrapper[4644]: I0204 09:17:35.175796 4644 util.go:30] "No sandbox for pod can be found. 
Feb 04 09:17:35 crc kubenswrapper[4644]: I0204 09:17:35.205478 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t7k59"]
Feb 04 09:17:35 crc kubenswrapper[4644]: I0204 09:17:35.319191 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b67abb6-99ac-49bb-8ec2-5a445e1c18e6-utilities\") pod \"certified-operators-t7k59\" (UID: \"3b67abb6-99ac-49bb-8ec2-5a445e1c18e6\") " pod="openshift-marketplace/certified-operators-t7k59"
Feb 04 09:17:35 crc kubenswrapper[4644]: I0204 09:17:35.319308 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhn4k\" (UniqueName: \"kubernetes.io/projected/3b67abb6-99ac-49bb-8ec2-5a445e1c18e6-kube-api-access-dhn4k\") pod \"certified-operators-t7k59\" (UID: \"3b67abb6-99ac-49bb-8ec2-5a445e1c18e6\") " pod="openshift-marketplace/certified-operators-t7k59"
Feb 04 09:17:35 crc kubenswrapper[4644]: I0204 09:17:35.319430 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b67abb6-99ac-49bb-8ec2-5a445e1c18e6-catalog-content\") pod \"certified-operators-t7k59\" (UID: \"3b67abb6-99ac-49bb-8ec2-5a445e1c18e6\") " pod="openshift-marketplace/certified-operators-t7k59"
Feb 04 09:17:35 crc kubenswrapper[4644]: I0204 09:17:35.421914 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhn4k\" (UniqueName: \"kubernetes.io/projected/3b67abb6-99ac-49bb-8ec2-5a445e1c18e6-kube-api-access-dhn4k\") pod \"certified-operators-t7k59\" (UID: \"3b67abb6-99ac-49bb-8ec2-5a445e1c18e6\") " pod="openshift-marketplace/certified-operators-t7k59"
Feb 04 09:17:35 crc kubenswrapper[4644]: I0204 09:17:35.421978 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b67abb6-99ac-49bb-8ec2-5a445e1c18e6-catalog-content\") pod \"certified-operators-t7k59\" (UID: \"3b67abb6-99ac-49bb-8ec2-5a445e1c18e6\") " pod="openshift-marketplace/certified-operators-t7k59"
Feb 04 09:17:35 crc kubenswrapper[4644]: I0204 09:17:35.422111 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b67abb6-99ac-49bb-8ec2-5a445e1c18e6-utilities\") pod \"certified-operators-t7k59\" (UID: \"3b67abb6-99ac-49bb-8ec2-5a445e1c18e6\") " pod="openshift-marketplace/certified-operators-t7k59"
Feb 04 09:17:35 crc kubenswrapper[4644]: I0204 09:17:35.422635 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b67abb6-99ac-49bb-8ec2-5a445e1c18e6-utilities\") pod \"certified-operators-t7k59\" (UID: \"3b67abb6-99ac-49bb-8ec2-5a445e1c18e6\") " pod="openshift-marketplace/certified-operators-t7k59"
Feb 04 09:17:35 crc kubenswrapper[4644]: I0204 09:17:35.422803 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b67abb6-99ac-49bb-8ec2-5a445e1c18e6-catalog-content\") pod \"certified-operators-t7k59\" (UID: \"3b67abb6-99ac-49bb-8ec2-5a445e1c18e6\") " pod="openshift-marketplace/certified-operators-t7k59"
Feb 04 09:17:35 crc kubenswrapper[4644]: I0204 09:17:35.444036 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhn4k\" (UniqueName: \"kubernetes.io/projected/3b67abb6-99ac-49bb-8ec2-5a445e1c18e6-kube-api-access-dhn4k\") pod \"certified-operators-t7k59\" (UID: \"3b67abb6-99ac-49bb-8ec2-5a445e1c18e6\") " pod="openshift-marketplace/certified-operators-t7k59"
Feb 04 09:17:35 crc kubenswrapper[4644]: I0204 09:17:35.497442 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7k59"
Feb 04 09:17:36 crc kubenswrapper[4644]: I0204 09:17:36.120922 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t7k59"]
Feb 04 09:17:36 crc kubenswrapper[4644]: I0204 09:17:36.819988 4644 generic.go:334] "Generic (PLEG): container finished" podID="3b67abb6-99ac-49bb-8ec2-5a445e1c18e6" containerID="ed70941a7989b4ed15bb75af930352ab88df82ee0c47484fec02c2e7c2065cf6" exitCode=0
Feb 04 09:17:36 crc kubenswrapper[4644]: I0204 09:17:36.820082 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7k59" event={"ID":"3b67abb6-99ac-49bb-8ec2-5a445e1c18e6","Type":"ContainerDied","Data":"ed70941a7989b4ed15bb75af930352ab88df82ee0c47484fec02c2e7c2065cf6"}
Feb 04 09:17:36 crc kubenswrapper[4644]: I0204 09:17:36.820386 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7k59" event={"ID":"3b67abb6-99ac-49bb-8ec2-5a445e1c18e6","Type":"ContainerStarted","Data":"c784fd5bcec5cb68f16923f18baddaa969b23d87534a4ffb7d854df02e14b45d"}
Feb 04 09:17:37 crc kubenswrapper[4644]: I0204 09:17:37.564206 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ppp2r"]
Feb 04 09:17:37 crc kubenswrapper[4644]: I0204 09:17:37.566862 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppp2r"
Feb 04 09:17:37 crc kubenswrapper[4644]: I0204 09:17:37.590634 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppp2r"]
Feb 04 09:17:37 crc kubenswrapper[4644]: I0204 09:17:37.675780 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c2978d-e0bd-4430-8c16-67f5d81284d5-catalog-content\") pod \"redhat-marketplace-ppp2r\" (UID: \"06c2978d-e0bd-4430-8c16-67f5d81284d5\") " pod="openshift-marketplace/redhat-marketplace-ppp2r"
Feb 04 09:17:37 crc kubenswrapper[4644]: I0204 09:17:37.676145 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbcbh\" (UniqueName: \"kubernetes.io/projected/06c2978d-e0bd-4430-8c16-67f5d81284d5-kube-api-access-mbcbh\") pod \"redhat-marketplace-ppp2r\" (UID: \"06c2978d-e0bd-4430-8c16-67f5d81284d5\") " pod="openshift-marketplace/redhat-marketplace-ppp2r"
Feb 04 09:17:37 crc kubenswrapper[4644]: I0204 09:17:37.676359 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c2978d-e0bd-4430-8c16-67f5d81284d5-utilities\") pod \"redhat-marketplace-ppp2r\" (UID: \"06c2978d-e0bd-4430-8c16-67f5d81284d5\") " pod="openshift-marketplace/redhat-marketplace-ppp2r"
Feb 04 09:17:37 crc kubenswrapper[4644]: I0204 09:17:37.778446 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c2978d-e0bd-4430-8c16-67f5d81284d5-catalog-content\") pod \"redhat-marketplace-ppp2r\" (UID: \"06c2978d-e0bd-4430-8c16-67f5d81284d5\") " pod="openshift-marketplace/redhat-marketplace-ppp2r"
Feb 04 09:17:37 crc kubenswrapper[4644]: I0204 09:17:37.778582 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbcbh\" (UniqueName: \"kubernetes.io/projected/06c2978d-e0bd-4430-8c16-67f5d81284d5-kube-api-access-mbcbh\") pod \"redhat-marketplace-ppp2r\" (UID: \"06c2978d-e0bd-4430-8c16-67f5d81284d5\") " pod="openshift-marketplace/redhat-marketplace-ppp2r"
Feb 04 09:17:37 crc kubenswrapper[4644]: I0204 09:17:37.778658 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c2978d-e0bd-4430-8c16-67f5d81284d5-utilities\") pod \"redhat-marketplace-ppp2r\" (UID: \"06c2978d-e0bd-4430-8c16-67f5d81284d5\") " pod="openshift-marketplace/redhat-marketplace-ppp2r"
Feb 04 09:17:37 crc kubenswrapper[4644]: I0204 09:17:37.779077 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c2978d-e0bd-4430-8c16-67f5d81284d5-catalog-content\") pod \"redhat-marketplace-ppp2r\" (UID: \"06c2978d-e0bd-4430-8c16-67f5d81284d5\") " pod="openshift-marketplace/redhat-marketplace-ppp2r"
Feb 04 09:17:37 crc kubenswrapper[4644]: I0204 09:17:37.779131 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c2978d-e0bd-4430-8c16-67f5d81284d5-utilities\") pod \"redhat-marketplace-ppp2r\" (UID: \"06c2978d-e0bd-4430-8c16-67f5d81284d5\") " pod="openshift-marketplace/redhat-marketplace-ppp2r"
Feb 04 09:17:37 crc kubenswrapper[4644]: I0204 09:17:37.799555 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbcbh\" (UniqueName: \"kubernetes.io/projected/06c2978d-e0bd-4430-8c16-67f5d81284d5-kube-api-access-mbcbh\") pod \"redhat-marketplace-ppp2r\" (UID: \"06c2978d-e0bd-4430-8c16-67f5d81284d5\") " pod="openshift-marketplace/redhat-marketplace-ppp2r"
Feb 04 09:17:37 crc kubenswrapper[4644]: I0204 09:17:37.893483 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppp2r"
Feb 04 09:17:38 crc kubenswrapper[4644]: I0204 09:17:38.492616 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppp2r"]
Feb 04 09:17:38 crc kubenswrapper[4644]: I0204 09:17:38.848011 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppp2r" event={"ID":"06c2978d-e0bd-4430-8c16-67f5d81284d5","Type":"ContainerStarted","Data":"60e83882a58d53dbdb6dfbf35243f73d85a8d6320f7c4a643a60d9ff2463b308"}
Feb 04 09:17:39 crc kubenswrapper[4644]: I0204 09:17:39.861621 4644 generic.go:334] "Generic (PLEG): container finished" podID="06c2978d-e0bd-4430-8c16-67f5d81284d5" containerID="510a4bc9b38784a0e097faf14a77a547f9dd3f0be3009ddbca6025a3799876cc" exitCode=0
Feb 04 09:17:39 crc kubenswrapper[4644]: I0204 09:17:39.861728 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppp2r" event={"ID":"06c2978d-e0bd-4430-8c16-67f5d81284d5","Type":"ContainerDied","Data":"510a4bc9b38784a0e097faf14a77a547f9dd3f0be3009ddbca6025a3799876cc"}
Feb 04 09:17:41 crc kubenswrapper[4644]: I0204 09:17:41.881946 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppp2r" event={"ID":"06c2978d-e0bd-4430-8c16-67f5d81284d5","Type":"ContainerStarted","Data":"fa4a85134f8f4f180b94f6fc101c7406220e01f3f8609aa16451722233756039"}
Feb 04 09:17:45 crc kubenswrapper[4644]: I0204 09:17:45.916830 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7k59" event={"ID":"3b67abb6-99ac-49bb-8ec2-5a445e1c18e6","Type":"ContainerStarted","Data":"a7ec5739e17bee8f0a6f0311fad6f0f8bf3d87555c84279d0edc34cf9916853a"}
Feb 04 09:17:45 crc kubenswrapper[4644]: I0204 09:17:45.922725 4644 generic.go:334] "Generic (PLEG): container finished" podID="06c2978d-e0bd-4430-8c16-67f5d81284d5" containerID="fa4a85134f8f4f180b94f6fc101c7406220e01f3f8609aa16451722233756039" exitCode=0
Feb 04 09:17:45 crc kubenswrapper[4644]: I0204 09:17:45.922823 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppp2r" event={"ID":"06c2978d-e0bd-4430-8c16-67f5d81284d5","Type":"ContainerDied","Data":"fa4a85134f8f4f180b94f6fc101c7406220e01f3f8609aa16451722233756039"}
Feb 04 09:17:46 crc kubenswrapper[4644]: I0204 09:17:46.937895 4644 generic.go:334] "Generic (PLEG): container finished" podID="3b67abb6-99ac-49bb-8ec2-5a445e1c18e6" containerID="a7ec5739e17bee8f0a6f0311fad6f0f8bf3d87555c84279d0edc34cf9916853a" exitCode=0
Feb 04 09:17:46 crc kubenswrapper[4644]: I0204 09:17:46.938825 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7k59" event={"ID":"3b67abb6-99ac-49bb-8ec2-5a445e1c18e6","Type":"ContainerDied","Data":"a7ec5739e17bee8f0a6f0311fad6f0f8bf3d87555c84279d0edc34cf9916853a"}
Feb 04 09:17:47 crc kubenswrapper[4644]: I0204 09:17:47.950610 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppp2r" event={"ID":"06c2978d-e0bd-4430-8c16-67f5d81284d5","Type":"ContainerStarted","Data":"bf08b2c4b1c89ed6d0aaf416bb31b33d7c159aa79d95460cc962c9d4096c7714"}
event={"ID":"06c2978d-e0bd-4430-8c16-67f5d81284d5","Type":"ContainerStarted","Data":"bf08b2c4b1c89ed6d0aaf416bb31b33d7c159aa79d95460cc962c9d4096c7714"} Feb 04 09:17:47 crc kubenswrapper[4644]: I0204 09:17:47.976164 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ppp2r" podStartSLOduration=3.970805752 podStartE2EDuration="10.976148229s" podCreationTimestamp="2026-02-04 09:17:37 +0000 UTC" firstStartedPulling="2026-02-04 09:17:39.863877058 +0000 UTC m=+2169.903934813" lastFinishedPulling="2026-02-04 09:17:46.869219515 +0000 UTC m=+2176.909277290" observedRunningTime="2026-02-04 09:17:47.974453584 +0000 UTC m=+2178.014511349" watchObservedRunningTime="2026-02-04 09:17:47.976148229 +0000 UTC m=+2178.016205974" Feb 04 09:17:48 crc kubenswrapper[4644]: I0204 09:17:48.960660 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7k59" event={"ID":"3b67abb6-99ac-49bb-8ec2-5a445e1c18e6","Type":"ContainerStarted","Data":"09e890ed6e3801a8f2c3b035e06e277be0da26f04c2d115f9829b8083f024846"} Feb 04 09:17:48 crc kubenswrapper[4644]: I0204 09:17:48.978731 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t7k59" podStartSLOduration=2.9799274000000002 podStartE2EDuration="13.978711172s" podCreationTimestamp="2026-02-04 09:17:35 +0000 UTC" firstStartedPulling="2026-02-04 09:17:36.822789038 +0000 UTC m=+2166.862846793" lastFinishedPulling="2026-02-04 09:17:47.82157281 +0000 UTC m=+2177.861630565" observedRunningTime="2026-02-04 09:17:48.975722501 +0000 UTC m=+2179.015780256" watchObservedRunningTime="2026-02-04 09:17:48.978711172 +0000 UTC m=+2179.018768927" Feb 04 09:17:55 crc kubenswrapper[4644]: I0204 09:17:55.498049 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t7k59" Feb 04 09:17:55 crc kubenswrapper[4644]: I0204 09:17:55.498719 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t7k59" Feb 04 09:17:55 crc kubenswrapper[4644]: I0204 09:17:55.547683 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t7k59" Feb 04 09:17:56 crc kubenswrapper[4644]: I0204 09:17:56.071644 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t7k59" Feb 04 09:17:56 crc kubenswrapper[4644]: I0204 09:17:56.469978 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t7k59"] Feb 04 09:17:56 crc kubenswrapper[4644]: I0204 09:17:56.589563 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hpbsg"] Feb 04 09:17:56 crc kubenswrapper[4644]: I0204 09:17:56.589833 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hpbsg" podUID="2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb" containerName="registry-server" containerID="cri-o://af43624fe1bd98f49bd998e55ba46c2bdf32913173006ea3e02858bbd1002520" gracePeriod=2 Feb 04 09:17:57 crc kubenswrapper[4644]: I0204 09:17:57.046104 4644 generic.go:334] "Generic (PLEG): container finished" podID="2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb" containerID="af43624fe1bd98f49bd998e55ba46c2bdf32913173006ea3e02858bbd1002520" exitCode=0 Feb 04 09:17:57 crc kubenswrapper[4644]: I0204 09:17:57.046200 4644 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpbsg" event={"ID":"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb","Type":"ContainerDied","Data":"af43624fe1bd98f49bd998e55ba46c2bdf32913173006ea3e02858bbd1002520"} Feb 04 09:17:57 crc kubenswrapper[4644]: I0204 09:17:57.784484 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 09:17:57 crc kubenswrapper[4644]: I0204 09:17:57.842636 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-utilities\") pod \"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb\" (UID: \"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb\") " Feb 04 09:17:57 crc kubenswrapper[4644]: I0204 09:17:57.842782 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-catalog-content\") pod \"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb\" (UID: \"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb\") " Feb 04 09:17:57 crc kubenswrapper[4644]: I0204 09:17:57.842835 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cglq\" (UniqueName: \"kubernetes.io/projected/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-kube-api-access-9cglq\") pod \"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb\" (UID: \"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb\") " Feb 04 09:17:57 crc kubenswrapper[4644]: I0204 09:17:57.851345 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-utilities" (OuterVolumeSpecName: "utilities") pod "2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb" (UID: "2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:17:57 crc kubenswrapper[4644]: I0204 09:17:57.867056 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-kube-api-access-9cglq" (OuterVolumeSpecName: "kube-api-access-9cglq") pod "2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb" (UID: "2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb"). InnerVolumeSpecName "kube-api-access-9cglq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:17:57 crc kubenswrapper[4644]: I0204 09:17:57.894940 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ppp2r" Feb 04 09:17:57 crc kubenswrapper[4644]: I0204 09:17:57.895691 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ppp2r" Feb 04 09:17:57 crc kubenswrapper[4644]: I0204 09:17:57.945273 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 09:17:57 crc kubenswrapper[4644]: I0204 09:17:57.945309 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cglq\" (UniqueName: \"kubernetes.io/projected/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-kube-api-access-9cglq\") on node \"crc\" DevicePath \"\"" Feb 04 09:17:58 crc kubenswrapper[4644]: I0204 09:17:58.038389 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb" (UID: "2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:17:58 crc kubenswrapper[4644]: I0204 09:17:58.047019 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 09:17:58 crc kubenswrapper[4644]: I0204 09:17:58.058854 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hpbsg" Feb 04 09:17:58 crc kubenswrapper[4644]: I0204 09:17:58.067934 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpbsg" event={"ID":"2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb","Type":"ContainerDied","Data":"afa1a0a78933a0300d6a0f2965e38e032f504852b2fd950cc9364a47e2cceb96"} Feb 04 09:17:58 crc kubenswrapper[4644]: I0204 09:17:58.068065 4644 scope.go:117] "RemoveContainer" containerID="af43624fe1bd98f49bd998e55ba46c2bdf32913173006ea3e02858bbd1002520" Feb 04 09:17:58 crc kubenswrapper[4644]: I0204 09:17:58.108886 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hpbsg"] Feb 04 09:17:58 crc kubenswrapper[4644]: I0204 09:17:58.110315 4644 scope.go:117] "RemoveContainer" containerID="08493a6f0f9deffa0c1ac9d3a78153cfa270f19b6d60099f8f4e04bf961166f6" Feb 04 09:17:58 crc kubenswrapper[4644]: I0204 09:17:58.126165 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hpbsg"] Feb 04 09:17:58 crc kubenswrapper[4644]: I0204 09:17:58.144367 4644 scope.go:117] "RemoveContainer" containerID="2ef9e76ef0b511b7294a3f2fc0586a3084cd1ff73e0b600867e69d90ee1a1df3" Feb 04 09:17:58 crc kubenswrapper[4644]: I0204 09:17:58.675296 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb" path="/var/lib/kubelet/pods/2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb/volumes" Feb 04 09:17:58 crc kubenswrapper[4644]: I0204 09:17:58.961761 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-ppp2r" podUID="06c2978d-e0bd-4430-8c16-67f5d81284d5" containerName="registry-server" probeResult="failure" output=< Feb 04 09:17:58 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:17:58 crc kubenswrapper[4644]: > Feb 04 09:18:07 crc kubenswrapper[4644]: I0204 09:18:07.977006 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ppp2r" Feb 04 09:18:08 crc kubenswrapper[4644]: I0204 09:18:08.033957 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ppp2r" Feb 04 09:18:10 crc kubenswrapper[4644]: I0204 09:18:10.966936 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppp2r"] Feb 04 09:18:10 crc kubenswrapper[4644]: I0204 09:18:10.968390 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ppp2r" podUID="06c2978d-e0bd-4430-8c16-67f5d81284d5" containerName="registry-server" containerID="cri-o://bf08b2c4b1c89ed6d0aaf416bb31b33d7c159aa79d95460cc962c9d4096c7714" gracePeriod=2 Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.039116 4644 util.go:48] "No ready sandbox for pod can be found. 
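The "Probe failed" output above ("timeout: failed to connect service \":50051\" within 1s") matches a gRPC-style health check against the registry-server's port 50051, which the catalog pod only starts serving once its content is loaded; the probe flips to "started"/"ready" a few seconds later. A sketch of an equivalent client-side check using the standard gRPC health protocol is below; the localhost address, the plaintext credentials, and the 1s timeout are assumptions for illustration, not taken from the log:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Mirror the probe's 1s budget for connecting and checking.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Port taken from the probe output (":50051"); host is assumed.
	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
	if err != nil {
		log.Fatalf("failed to connect service %q within 1s: %v", ":50051", err)
	}
	defer conn.Close()

	// Standard gRPC health check (grpc.health.v1.Health/Check).
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		log.Fatalf("health check RPC failed: %v", err)
	}
	log.Printf("status: %s", resp.GetStatus()) // SERVING once the catalog is up
}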
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.188778 4644 generic.go:334] "Generic (PLEG): container finished" podID="06c2978d-e0bd-4430-8c16-67f5d81284d5" containerID="bf08b2c4b1c89ed6d0aaf416bb31b33d7c159aa79d95460cc962c9d4096c7714" exitCode=0
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.188838 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppp2r" event={"ID":"06c2978d-e0bd-4430-8c16-67f5d81284d5","Type":"ContainerDied","Data":"bf08b2c4b1c89ed6d0aaf416bb31b33d7c159aa79d95460cc962c9d4096c7714"}
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.188871 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppp2r" event={"ID":"06c2978d-e0bd-4430-8c16-67f5d81284d5","Type":"ContainerDied","Data":"60e83882a58d53dbdb6dfbf35243f73d85a8d6320f7c4a643a60d9ff2463b308"}
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.188890 4644 scope.go:117] "RemoveContainer" containerID="bf08b2c4b1c89ed6d0aaf416bb31b33d7c159aa79d95460cc962c9d4096c7714"
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.188897 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppp2r"
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.206953 4644 scope.go:117] "RemoveContainer" containerID="fa4a85134f8f4f180b94f6fc101c7406220e01f3f8609aa16451722233756039"
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.222365 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c2978d-e0bd-4430-8c16-67f5d81284d5-catalog-content\") pod \"06c2978d-e0bd-4430-8c16-67f5d81284d5\" (UID: \"06c2978d-e0bd-4430-8c16-67f5d81284d5\") "
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.222438 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbcbh\" (UniqueName: \"kubernetes.io/projected/06c2978d-e0bd-4430-8c16-67f5d81284d5-kube-api-access-mbcbh\") pod \"06c2978d-e0bd-4430-8c16-67f5d81284d5\" (UID: \"06c2978d-e0bd-4430-8c16-67f5d81284d5\") "
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.222539 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c2978d-e0bd-4430-8c16-67f5d81284d5-utilities\") pod \"06c2978d-e0bd-4430-8c16-67f5d81284d5\" (UID: \"06c2978d-e0bd-4430-8c16-67f5d81284d5\") "
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.223623 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c2978d-e0bd-4430-8c16-67f5d81284d5-utilities" (OuterVolumeSpecName: "utilities") pod "06c2978d-e0bd-4430-8c16-67f5d81284d5" (UID: "06c2978d-e0bd-4430-8c16-67f5d81284d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.230766 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c2978d-e0bd-4430-8c16-67f5d81284d5-kube-api-access-mbcbh" (OuterVolumeSpecName: "kube-api-access-mbcbh") pod "06c2978d-e0bd-4430-8c16-67f5d81284d5" (UID: "06c2978d-e0bd-4430-8c16-67f5d81284d5"). InnerVolumeSpecName "kube-api-access-mbcbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.233055 4644 scope.go:117] "RemoveContainer" containerID="510a4bc9b38784a0e097faf14a77a547f9dd3f0be3009ddbca6025a3799876cc"
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.250921 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c2978d-e0bd-4430-8c16-67f5d81284d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06c2978d-e0bd-4430-8c16-67f5d81284d5" (UID: "06c2978d-e0bd-4430-8c16-67f5d81284d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.324564 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c2978d-e0bd-4430-8c16-67f5d81284d5-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.324614 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c2978d-e0bd-4430-8c16-67f5d81284d5-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.324632 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbcbh\" (UniqueName: \"kubernetes.io/projected/06c2978d-e0bd-4430-8c16-67f5d81284d5-kube-api-access-mbcbh\") on node \"crc\" DevicePath \"\""
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.325058 4644 scope.go:117] "RemoveContainer" containerID="bf08b2c4b1c89ed6d0aaf416bb31b33d7c159aa79d95460cc962c9d4096c7714"
Feb 04 09:18:12 crc kubenswrapper[4644]: E0204 09:18:12.325719 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf08b2c4b1c89ed6d0aaf416bb31b33d7c159aa79d95460cc962c9d4096c7714\": container with ID starting with bf08b2c4b1c89ed6d0aaf416bb31b33d7c159aa79d95460cc962c9d4096c7714 not found: ID does not exist" containerID="bf08b2c4b1c89ed6d0aaf416bb31b33d7c159aa79d95460cc962c9d4096c7714"
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.325767 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf08b2c4b1c89ed6d0aaf416bb31b33d7c159aa79d95460cc962c9d4096c7714"} err="failed to get container status \"bf08b2c4b1c89ed6d0aaf416bb31b33d7c159aa79d95460cc962c9d4096c7714\": rpc error: code = NotFound desc = could not find container \"bf08b2c4b1c89ed6d0aaf416bb31b33d7c159aa79d95460cc962c9d4096c7714\": container with ID starting with bf08b2c4b1c89ed6d0aaf416bb31b33d7c159aa79d95460cc962c9d4096c7714 not found: ID does not exist"
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.325794 4644 scope.go:117] "RemoveContainer" containerID="fa4a85134f8f4f180b94f6fc101c7406220e01f3f8609aa16451722233756039"
Feb 04 09:18:12 crc kubenswrapper[4644]: E0204 09:18:12.326224 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa4a85134f8f4f180b94f6fc101c7406220e01f3f8609aa16451722233756039\": container with ID starting with fa4a85134f8f4f180b94f6fc101c7406220e01f3f8609aa16451722233756039 not found: ID does not exist" containerID="fa4a85134f8f4f180b94f6fc101c7406220e01f3f8609aa16451722233756039"
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.326250 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa4a85134f8f4f180b94f6fc101c7406220e01f3f8609aa16451722233756039"} err="failed to get container status \"fa4a85134f8f4f180b94f6fc101c7406220e01f3f8609aa16451722233756039\": rpc error: code = NotFound desc = could not find container \"fa4a85134f8f4f180b94f6fc101c7406220e01f3f8609aa16451722233756039\": container with ID starting with fa4a85134f8f4f180b94f6fc101c7406220e01f3f8609aa16451722233756039 not found: ID does not exist"
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.326267 4644 scope.go:117] "RemoveContainer" containerID="510a4bc9b38784a0e097faf14a77a547f9dd3f0be3009ddbca6025a3799876cc"
Feb 04 09:18:12 crc kubenswrapper[4644]: E0204 09:18:12.327225 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"510a4bc9b38784a0e097faf14a77a547f9dd3f0be3009ddbca6025a3799876cc\": container with ID starting with 510a4bc9b38784a0e097faf14a77a547f9dd3f0be3009ddbca6025a3799876cc not found: ID does not exist" containerID="510a4bc9b38784a0e097faf14a77a547f9dd3f0be3009ddbca6025a3799876cc"
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.327252 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"510a4bc9b38784a0e097faf14a77a547f9dd3f0be3009ddbca6025a3799876cc"} err="failed to get container status \"510a4bc9b38784a0e097faf14a77a547f9dd3f0be3009ddbca6025a3799876cc\": rpc error: code = NotFound desc = could not find container \"510a4bc9b38784a0e097faf14a77a547f9dd3f0be3009ddbca6025a3799876cc\": container with ID starting with 510a4bc9b38784a0e097faf14a77a547f9dd3f0be3009ddbca6025a3799876cc not found: ID does not exist"
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.521787 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppp2r"]
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.541442 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppp2r"]
Feb 04 09:18:12 crc kubenswrapper[4644]: I0204 09:18:12.673568 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06c2978d-e0bd-4430-8c16-67f5d81284d5" path="/var/lib/kubelet/pods/06c2978d-e0bd-4430-8c16-67f5d81284d5/volumes"
Feb 04 09:18:28 crc kubenswrapper[4644]: I0204 09:18:28.323386 4644 generic.go:334] "Generic (PLEG): container finished" podID="da0998d9-9cc2-4e46-ac4f-f47ec801a998" containerID="8b816530cdc7437f3318a1d46acc561398ceec346d687c35a8d240edbe995cbb" exitCode=0
Feb 04 09:18:28 crc kubenswrapper[4644]: I0204 09:18:28.323467 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" event={"ID":"da0998d9-9cc2-4e46-ac4f-f47ec801a998","Type":"ContainerDied","Data":"8b816530cdc7437f3318a1d46acc561398ceec346d687c35a8d240edbe995cbb"}
Feb 04 09:18:29 crc kubenswrapper[4644]: I0204 09:18:29.845177 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv"
Feb 04 09:18:29 crc kubenswrapper[4644]: I0204 09:18:29.987069 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-neutron-metadata-combined-ca-bundle\") pod \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") "
Feb 04 09:18:29 crc kubenswrapper[4644]: I0204 09:18:29.987127 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-nova-metadata-neutron-config-0\") pod \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") "
Feb 04 09:18:29 crc kubenswrapper[4644]: I0204 09:18:29.987214 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvzl8\" (UniqueName: \"kubernetes.io/projected/da0998d9-9cc2-4e46-ac4f-f47ec801a998-kube-api-access-bvzl8\") pod \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") "
Feb 04 09:18:29 crc kubenswrapper[4644]: I0204 09:18:29.987295 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-neutron-ovn-metadata-agent-neutron-config-0\") pod \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") "
Feb 04 09:18:29 crc kubenswrapper[4644]: I0204 09:18:29.987346 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-ssh-key-openstack-edpm-ipam\") pod \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") "
Feb 04 09:18:29 crc kubenswrapper[4644]: I0204 09:18:29.987413 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-inventory\") pod \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\" (UID: \"da0998d9-9cc2-4e46-ac4f-f47ec801a998\") "
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.001545 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "da0998d9-9cc2-4e46-ac4f-f47ec801a998" (UID: "da0998d9-9cc2-4e46-ac4f-f47ec801a998"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.006024 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da0998d9-9cc2-4e46-ac4f-f47ec801a998-kube-api-access-bvzl8" (OuterVolumeSpecName: "kube-api-access-bvzl8") pod "da0998d9-9cc2-4e46-ac4f-f47ec801a998" (UID: "da0998d9-9cc2-4e46-ac4f-f47ec801a998"). InnerVolumeSpecName "kube-api-access-bvzl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.015974 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-inventory" (OuterVolumeSpecName: "inventory") pod "da0998d9-9cc2-4e46-ac4f-f47ec801a998" (UID: "da0998d9-9cc2-4e46-ac4f-f47ec801a998"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.018098 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "da0998d9-9cc2-4e46-ac4f-f47ec801a998" (UID: "da0998d9-9cc2-4e46-ac4f-f47ec801a998"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.018649 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "da0998d9-9cc2-4e46-ac4f-f47ec801a998" (UID: "da0998d9-9cc2-4e46-ac4f-f47ec801a998"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.022420 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "da0998d9-9cc2-4e46-ac4f-f47ec801a998" (UID: "da0998d9-9cc2-4e46-ac4f-f47ec801a998"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.095523 4644 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.095560 4644 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.095569 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvzl8\" (UniqueName: \"kubernetes.io/projected/da0998d9-9cc2-4e46-ac4f-f47ec801a998-kube-api-access-bvzl8\") on node \"crc\" DevicePath \"\""
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.095583 4644 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.095592 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.095601 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da0998d9-9cc2-4e46-ac4f-f47ec801a998-inventory\") on node \"crc\" DevicePath \"\""
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.341727 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv" event={"ID":"da0998d9-9cc2-4e46-ac4f-f47ec801a998","Type":"ContainerDied","Data":"eb7b2bca9439d3bf62bd4446003f3beb5b960c0aa4ca35ff5494936c5ea17357"}
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.341805 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb7b2bca9439d3bf62bd4446003f3beb5b960c0aa4ca35ff5494936c5ea17357"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.341806 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.456230 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph"]
Feb 04 09:18:30 crc kubenswrapper[4644]: E0204 09:18:30.456668 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb" containerName="extract-content"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.456690 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb" containerName="extract-content"
Feb 04 09:18:30 crc kubenswrapper[4644]: E0204 09:18:30.456702 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c2978d-e0bd-4430-8c16-67f5d81284d5" containerName="extract-utilities"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.456710 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c2978d-e0bd-4430-8c16-67f5d81284d5" containerName="extract-utilities"
Feb 04 09:18:30 crc kubenswrapper[4644]: E0204 09:18:30.456735 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb" containerName="registry-server"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.456744 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb" containerName="registry-server"
Feb 04 09:18:30 crc kubenswrapper[4644]: E0204 09:18:30.456755 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c2978d-e0bd-4430-8c16-67f5d81284d5" containerName="extract-content"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.456764 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c2978d-e0bd-4430-8c16-67f5d81284d5" containerName="extract-content"
Feb 04 09:18:30 crc kubenswrapper[4644]: E0204 09:18:30.456785 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb" containerName="extract-utilities"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.456795 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb" containerName="extract-utilities"
Feb 04 09:18:30 crc kubenswrapper[4644]: E0204 09:18:30.456810 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c2978d-e0bd-4430-8c16-67f5d81284d5" containerName="registry-server"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.456817 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c2978d-e0bd-4430-8c16-67f5d81284d5" containerName="registry-server"
Feb 04 09:18:30 crc kubenswrapper[4644]: E0204 09:18:30.456841 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da0998d9-9cc2-4e46-ac4f-f47ec801a998" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.456851 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="da0998d9-9cc2-4e46-ac4f-f47ec801a998" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.457061 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3dbc0d-e25d-47cb-a2f0-b24ac07bcfbb" containerName="registry-server"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.457076 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="06c2978d-e0bd-4430-8c16-67f5d81284d5" containerName="registry-server"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.457102 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="da0998d9-9cc2-4e46-ac4f-f47ec801a998" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.457800 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.466852 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.467195 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.467374 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.467557 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.467845 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.468864 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph"]
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.502372 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.502424 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phcqp\" (UniqueName: \"kubernetes.io/projected/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-kube-api-access-phcqp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.502503 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.502560 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph"
Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.502596 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph"
\"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.604208 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.604268 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.604377 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.604410 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phcqp\" (UniqueName: \"kubernetes.io/projected/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-kube-api-access-phcqp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.604490 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.606195 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.607653 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.608600 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.607830 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.622274 4644 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-phcqp\" (UniqueName: \"kubernetes.io/projected/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-kube-api-access-phcqp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.624768 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.624966 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.625244 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b44ph\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.778180 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:18:30 crc kubenswrapper[4644]: I0204 09:18:30.787104 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" Feb 04 09:18:31 crc kubenswrapper[4644]: I0204 09:18:31.326040 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph"] Feb 04 09:18:31 crc kubenswrapper[4644]: W0204 09:18:31.337246 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a074a3e_62ea_4cb2_96f3_ccce51518ad3.slice/crio-1667cbcd8382b166b0b4720dc96c311bfe17df7a278c1aaec802c161faae1662 WatchSource:0}: Error finding container 1667cbcd8382b166b0b4720dc96c311bfe17df7a278c1aaec802c161faae1662: Status 404 returned error can't find the container with id 1667cbcd8382b166b0b4720dc96c311bfe17df7a278c1aaec802c161faae1662 Feb 04 09:18:31 crc kubenswrapper[4644]: I0204 09:18:31.356423 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" event={"ID":"5a074a3e-62ea-4cb2-96f3-ccce51518ad3","Type":"ContainerStarted","Data":"1667cbcd8382b166b0b4720dc96c311bfe17df7a278c1aaec802c161faae1662"} Feb 04 09:18:32 crc kubenswrapper[4644]: I0204 09:18:32.071321 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:18:33 crc kubenswrapper[4644]: I0204 09:18:33.373320 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" event={"ID":"5a074a3e-62ea-4cb2-96f3-ccce51518ad3","Type":"ContainerStarted","Data":"a7f260113fbe343637c026d51f2d1788c42b04f612b7fdfd556e5c3fb77dae57"} Feb 04 09:18:33 crc kubenswrapper[4644]: I0204 09:18:33.395959 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" podStartSLOduration=2.668670884 podStartE2EDuration="3.395939375s" podCreationTimestamp="2026-02-04 09:18:30 +0000 UTC" firstStartedPulling="2026-02-04 09:18:31.340742406 +0000 UTC m=+2221.380800161" lastFinishedPulling="2026-02-04 09:18:32.068010897 +0000 UTC m=+2222.108068652" observedRunningTime="2026-02-04 09:18:33.392295396 +0000 UTC m=+2223.432353151" watchObservedRunningTime="2026-02-04 09:18:33.395939375 +0000 UTC m=+2223.435997160" Feb 04 09:18:35 crc kubenswrapper[4644]: I0204 09:18:35.554932 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:18:35 crc kubenswrapper[4644]: I0204 09:18:35.555323 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:19:05 crc kubenswrapper[4644]: I0204 09:19:05.554879 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:19:05 crc kubenswrapper[4644]: I0204 09:19:05.555754 4644 prober.go:107] "Probe failed" probeType="Liveness" 
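The machine-config-daemon liveness failures above are plain HTTP GETs against http://127.0.0.1:8798/health being refused, i.e. nothing was listening on that port at probe time. An equivalent standalone check in Go, with the URL taken from the probe output (the 1s timeout is an assumption matching the kubelet's default probe timeout):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	// URL copied from the Liveness probe output above.
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status) // a 2xx status counts as healthy
}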
pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:19:18 crc kubenswrapper[4644]: I0204 09:19:18.814117 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6qtgr"] Feb 04 09:19:18 crc kubenswrapper[4644]: I0204 09:19:18.817973 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:18 crc kubenswrapper[4644]: I0204 09:19:18.856840 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6qtgr"] Feb 04 09:19:18 crc kubenswrapper[4644]: I0204 09:19:18.946830 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4t62\" (UniqueName: \"kubernetes.io/projected/734879ec-9bca-4287-877f-13baf866e62e-kube-api-access-t4t62\") pod \"community-operators-6qtgr\" (UID: \"734879ec-9bca-4287-877f-13baf866e62e\") " pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:18 crc kubenswrapper[4644]: I0204 09:19:18.946908 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/734879ec-9bca-4287-877f-13baf866e62e-utilities\") pod \"community-operators-6qtgr\" (UID: \"734879ec-9bca-4287-877f-13baf866e62e\") " pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:18 crc kubenswrapper[4644]: I0204 09:19:18.947983 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/734879ec-9bca-4287-877f-13baf866e62e-catalog-content\") pod \"community-operators-6qtgr\" (UID: \"734879ec-9bca-4287-877f-13baf866e62e\") " pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:19 crc kubenswrapper[4644]: I0204 09:19:19.049893 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4t62\" (UniqueName: \"kubernetes.io/projected/734879ec-9bca-4287-877f-13baf866e62e-kube-api-access-t4t62\") pod \"community-operators-6qtgr\" (UID: \"734879ec-9bca-4287-877f-13baf866e62e\") " pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:19 crc kubenswrapper[4644]: I0204 09:19:19.049961 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/734879ec-9bca-4287-877f-13baf866e62e-utilities\") pod \"community-operators-6qtgr\" (UID: \"734879ec-9bca-4287-877f-13baf866e62e\") " pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:19 crc kubenswrapper[4644]: I0204 09:19:19.050006 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/734879ec-9bca-4287-877f-13baf866e62e-catalog-content\") pod \"community-operators-6qtgr\" (UID: \"734879ec-9bca-4287-877f-13baf866e62e\") " pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:19 crc kubenswrapper[4644]: I0204 09:19:19.050559 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/734879ec-9bca-4287-877f-13baf866e62e-catalog-content\") pod \"community-operators-6qtgr\" (UID: 
\"734879ec-9bca-4287-877f-13baf866e62e\") " pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:19 crc kubenswrapper[4644]: I0204 09:19:19.050626 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/734879ec-9bca-4287-877f-13baf866e62e-utilities\") pod \"community-operators-6qtgr\" (UID: \"734879ec-9bca-4287-877f-13baf866e62e\") " pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:19 crc kubenswrapper[4644]: I0204 09:19:19.075117 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4t62\" (UniqueName: \"kubernetes.io/projected/734879ec-9bca-4287-877f-13baf866e62e-kube-api-access-t4t62\") pod \"community-operators-6qtgr\" (UID: \"734879ec-9bca-4287-877f-13baf866e62e\") " pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:19 crc kubenswrapper[4644]: I0204 09:19:19.165391 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:19 crc kubenswrapper[4644]: I0204 09:19:19.745003 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6qtgr"] Feb 04 09:19:19 crc kubenswrapper[4644]: I0204 09:19:19.836221 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qtgr" event={"ID":"734879ec-9bca-4287-877f-13baf866e62e","Type":"ContainerStarted","Data":"4203fd470504878861a03e405decf2ba641917e3cd3b23cb7ac37fcb8433efaf"} Feb 04 09:19:20 crc kubenswrapper[4644]: I0204 09:19:20.844758 4644 generic.go:334] "Generic (PLEG): container finished" podID="734879ec-9bca-4287-877f-13baf866e62e" containerID="71322f48a37a38745c69b0c887faca06ca65939f142815179d170a0f3d8a1428" exitCode=0 Feb 04 09:19:20 crc kubenswrapper[4644]: I0204 09:19:20.844853 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qtgr" event={"ID":"734879ec-9bca-4287-877f-13baf866e62e","Type":"ContainerDied","Data":"71322f48a37a38745c69b0c887faca06ca65939f142815179d170a0f3d8a1428"} Feb 04 09:19:20 crc kubenswrapper[4644]: I0204 09:19:20.847658 4644 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 09:19:21 crc kubenswrapper[4644]: I0204 09:19:21.854973 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qtgr" event={"ID":"734879ec-9bca-4287-877f-13baf866e62e","Type":"ContainerStarted","Data":"d4246f2e41cfd68514cda779254cf06a44e64af7c291fb3586652a2aace780ea"} Feb 04 09:19:25 crc kubenswrapper[4644]: I0204 09:19:25.895500 4644 generic.go:334] "Generic (PLEG): container finished" podID="734879ec-9bca-4287-877f-13baf866e62e" containerID="d4246f2e41cfd68514cda779254cf06a44e64af7c291fb3586652a2aace780ea" exitCode=0 Feb 04 09:19:25 crc kubenswrapper[4644]: I0204 09:19:25.895595 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qtgr" event={"ID":"734879ec-9bca-4287-877f-13baf866e62e","Type":"ContainerDied","Data":"d4246f2e41cfd68514cda779254cf06a44e64af7c291fb3586652a2aace780ea"} Feb 04 09:19:26 crc kubenswrapper[4644]: I0204 09:19:26.908928 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qtgr" event={"ID":"734879ec-9bca-4287-877f-13baf866e62e","Type":"ContainerStarted","Data":"9680a145cf420a4d9b783ddfc461caf7042218ee3eb36189925eb96ae978acd9"} Feb 04 09:19:26 crc 
kubenswrapper[4644]: I0204 09:19:26.964101 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6qtgr" podStartSLOduration=3.350850341 podStartE2EDuration="8.964072765s" podCreationTimestamp="2026-02-04 09:19:18 +0000 UTC" firstStartedPulling="2026-02-04 09:19:20.847420842 +0000 UTC m=+2270.887478597" lastFinishedPulling="2026-02-04 09:19:26.460643266 +0000 UTC m=+2276.500701021" observedRunningTime="2026-02-04 09:19:26.957679092 +0000 UTC m=+2276.997736847" watchObservedRunningTime="2026-02-04 09:19:26.964072765 +0000 UTC m=+2277.004130530" Feb 04 09:19:29 crc kubenswrapper[4644]: I0204 09:19:29.165803 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:29 crc kubenswrapper[4644]: I0204 09:19:29.165859 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:29 crc kubenswrapper[4644]: I0204 09:19:29.214802 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:35 crc kubenswrapper[4644]: I0204 09:19:35.555075 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:19:35 crc kubenswrapper[4644]: I0204 09:19:35.555722 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:19:35 crc kubenswrapper[4644]: I0204 09:19:35.555782 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 09:19:35 crc kubenswrapper[4644]: I0204 09:19:35.556669 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 09:19:35 crc kubenswrapper[4644]: I0204 09:19:35.556737 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" gracePeriod=600 Feb 04 09:19:36 crc kubenswrapper[4644]: I0204 09:19:36.017713 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" exitCode=0 Feb 04 09:19:36 crc kubenswrapper[4644]: I0204 09:19:36.017800 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" 
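The entries above are the standard liveness-failure path: patch_prober records the raw HTTP failure, prober.go marks the probe failed, kuberuntime_manager decides the container "will be restarted", and kuberuntime_container kills it (gracePeriod=600). The probe itself is just an HTTP GET against the daemon's loopback health endpoint; a minimal Go sketch of an equivalent check (endpoint taken from the log; the 2xx/3xx success rule is kubelet's documented HTTP-probe behavior):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            // "dial tcp 127.0.0.1:8798: connect: connection refused" in the
            // journal corresponds to this branch: nothing is listening.
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        // Kubelet counts HTTP statuses in [200, 400) as probe success.
        fmt.Println("probe status:", resp.StatusCode)
    }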
event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb"} Feb 04 09:19:36 crc kubenswrapper[4644]: I0204 09:19:36.018134 4644 scope.go:117] "RemoveContainer" containerID="a8f1f22f8bb04b29ec4bff87a7286a4cc5bae3e174104b35d24f389917224840" Feb 04 09:19:36 crc kubenswrapper[4644]: E0204 09:19:36.212856 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:19:37 crc kubenswrapper[4644]: I0204 09:19:37.032049 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:19:37 crc kubenswrapper[4644]: E0204 09:19:37.032791 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:19:39 crc kubenswrapper[4644]: I0204 09:19:39.213866 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:39 crc kubenswrapper[4644]: I0204 09:19:39.270291 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6qtgr"] Feb 04 09:19:40 crc kubenswrapper[4644]: I0204 09:19:40.056225 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6qtgr" podUID="734879ec-9bca-4287-877f-13baf866e62e" containerName="registry-server" containerID="cri-o://9680a145cf420a4d9b783ddfc461caf7042218ee3eb36189925eb96ae978acd9" gracePeriod=2 Feb 04 09:19:41 crc kubenswrapper[4644]: I0204 09:19:41.104930 4644 generic.go:334] "Generic (PLEG): container finished" podID="734879ec-9bca-4287-877f-13baf866e62e" containerID="9680a145cf420a4d9b783ddfc461caf7042218ee3eb36189925eb96ae978acd9" exitCode=0 Feb 04 09:19:41 crc kubenswrapper[4644]: I0204 09:19:41.105022 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qtgr" event={"ID":"734879ec-9bca-4287-877f-13baf866e62e","Type":"ContainerDied","Data":"9680a145cf420a4d9b783ddfc461caf7042218ee3eb36189925eb96ae978acd9"} Feb 04 09:19:41 crc kubenswrapper[4644]: I0204 09:19:41.467079 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:41 crc kubenswrapper[4644]: I0204 09:19:41.511109 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/734879ec-9bca-4287-877f-13baf866e62e-catalog-content\") pod \"734879ec-9bca-4287-877f-13baf866e62e\" (UID: \"734879ec-9bca-4287-877f-13baf866e62e\") " Feb 04 09:19:41 crc kubenswrapper[4644]: I0204 09:19:41.511254 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4t62\" (UniqueName: \"kubernetes.io/projected/734879ec-9bca-4287-877f-13baf866e62e-kube-api-access-t4t62\") pod \"734879ec-9bca-4287-877f-13baf866e62e\" (UID: \"734879ec-9bca-4287-877f-13baf866e62e\") " Feb 04 09:19:41 crc kubenswrapper[4644]: I0204 09:19:41.511565 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/734879ec-9bca-4287-877f-13baf866e62e-utilities\") pod \"734879ec-9bca-4287-877f-13baf866e62e\" (UID: \"734879ec-9bca-4287-877f-13baf866e62e\") " Feb 04 09:19:41 crc kubenswrapper[4644]: I0204 09:19:41.513008 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/734879ec-9bca-4287-877f-13baf866e62e-utilities" (OuterVolumeSpecName: "utilities") pod "734879ec-9bca-4287-877f-13baf866e62e" (UID: "734879ec-9bca-4287-877f-13baf866e62e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:19:41 crc kubenswrapper[4644]: I0204 09:19:41.519731 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/734879ec-9bca-4287-877f-13baf866e62e-kube-api-access-t4t62" (OuterVolumeSpecName: "kube-api-access-t4t62") pod "734879ec-9bca-4287-877f-13baf866e62e" (UID: "734879ec-9bca-4287-877f-13baf866e62e"). InnerVolumeSpecName "kube-api-access-t4t62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:19:41 crc kubenswrapper[4644]: I0204 09:19:41.613908 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4t62\" (UniqueName: \"kubernetes.io/projected/734879ec-9bca-4287-877f-13baf866e62e-kube-api-access-t4t62\") on node \"crc\" DevicePath \"\"" Feb 04 09:19:41 crc kubenswrapper[4644]: I0204 09:19:41.613953 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/734879ec-9bca-4287-877f-13baf866e62e-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 09:19:41 crc kubenswrapper[4644]: I0204 09:19:41.668029 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/734879ec-9bca-4287-877f-13baf866e62e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "734879ec-9bca-4287-877f-13baf866e62e" (UID: "734879ec-9bca-4287-877f-13baf866e62e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:19:41 crc kubenswrapper[4644]: I0204 09:19:41.716675 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/734879ec-9bca-4287-877f-13baf866e62e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 09:19:42 crc kubenswrapper[4644]: I0204 09:19:42.116702 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qtgr" event={"ID":"734879ec-9bca-4287-877f-13baf866e62e","Type":"ContainerDied","Data":"4203fd470504878861a03e405decf2ba641917e3cd3b23cb7ac37fcb8433efaf"} Feb 04 09:19:42 crc kubenswrapper[4644]: I0204 09:19:42.116756 4644 scope.go:117] "RemoveContainer" containerID="9680a145cf420a4d9b783ddfc461caf7042218ee3eb36189925eb96ae978acd9" Feb 04 09:19:42 crc kubenswrapper[4644]: I0204 09:19:42.116757 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qtgr" Feb 04 09:19:42 crc kubenswrapper[4644]: I0204 09:19:42.139407 4644 scope.go:117] "RemoveContainer" containerID="d4246f2e41cfd68514cda779254cf06a44e64af7c291fb3586652a2aace780ea" Feb 04 09:19:42 crc kubenswrapper[4644]: I0204 09:19:42.162497 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6qtgr"] Feb 04 09:19:42 crc kubenswrapper[4644]: I0204 09:19:42.167815 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6qtgr"] Feb 04 09:19:42 crc kubenswrapper[4644]: I0204 09:19:42.171153 4644 scope.go:117] "RemoveContainer" containerID="71322f48a37a38745c69b0c887faca06ca65939f142815179d170a0f3d8a1428" Feb 04 09:19:42 crc kubenswrapper[4644]: I0204 09:19:42.680596 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="734879ec-9bca-4287-877f-13baf866e62e" path="/var/lib/kubelet/pods/734879ec-9bca-4287-877f-13baf866e62e/volumes" Feb 04 09:19:51 crc kubenswrapper[4644]: I0204 09:19:51.661077 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:19:51 crc kubenswrapper[4644]: E0204 09:19:51.661813 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:20:03 crc kubenswrapper[4644]: I0204 09:20:03.659825 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:20:03 crc kubenswrapper[4644]: E0204 09:20:03.660599 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:20:18 crc kubenswrapper[4644]: I0204 09:20:18.660757 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:20:18 crc kubenswrapper[4644]: E0204 
09:20:18.661570 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:20:31 crc kubenswrapper[4644]: I0204 09:20:31.660300 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:20:31 crc kubenswrapper[4644]: E0204 09:20:31.662658 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:20:44 crc kubenswrapper[4644]: I0204 09:20:44.660858 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:20:44 crc kubenswrapper[4644]: E0204 09:20:44.661620 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:20:58 crc kubenswrapper[4644]: I0204 09:20:58.659715 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:20:58 crc kubenswrapper[4644]: E0204 09:20:58.660520 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:21:13 crc kubenswrapper[4644]: I0204 09:21:13.659968 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:21:13 crc kubenswrapper[4644]: E0204 09:21:13.660763 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:21:27 crc kubenswrapper[4644]: I0204 09:21:27.661004 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:21:27 crc kubenswrapper[4644]: E0204 09:21:27.662200 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:21:38 crc kubenswrapper[4644]: I0204 09:21:38.662383 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:21:38 crc kubenswrapper[4644]: E0204 09:21:38.663309 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:21:49 crc kubenswrapper[4644]: I0204 09:21:49.660452 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:21:49 crc kubenswrapper[4644]: E0204 09:21:49.667264 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:22:04 crc kubenswrapper[4644]: I0204 09:22:04.660021 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:22:04 crc kubenswrapper[4644]: E0204 09:22:04.660805 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:22:19 crc kubenswrapper[4644]: I0204 09:22:19.659951 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:22:19 crc kubenswrapper[4644]: E0204 09:22:19.660755 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:22:30 crc kubenswrapper[4644]: I0204 09:22:30.676893 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:22:30 crc kubenswrapper[4644]: E0204 09:22:30.678251 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:22:41 crc kubenswrapper[4644]: I0204 09:22:41.660674 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:22:41 crc kubenswrapper[4644]: E0204 09:22:41.661588 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:22:55 crc kubenswrapper[4644]: I0204 09:22:55.659858 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:22:55 crc kubenswrapper[4644]: E0204 09:22:55.660763 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:23:00 crc kubenswrapper[4644]: I0204 09:23:00.913760 4644 generic.go:334] "Generic (PLEG): container finished" podID="5a074a3e-62ea-4cb2-96f3-ccce51518ad3" containerID="a7f260113fbe343637c026d51f2d1788c42b04f612b7fdfd556e5c3fb77dae57" exitCode=0 Feb 04 09:23:00 crc kubenswrapper[4644]: I0204 09:23:00.914250 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" event={"ID":"5a074a3e-62ea-4cb2-96f3-ccce51518ad3","Type":"ContainerDied","Data":"a7f260113fbe343637c026d51f2d1788c42b04f612b7fdfd556e5c3fb77dae57"} Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.323423 4644 util.go:48] "No ready sandbox for pod can be found. 
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.431462 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phcqp\" (UniqueName: \"kubernetes.io/projected/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-kube-api-access-phcqp\") pod \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") "
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.432056 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-libvirt-combined-ca-bundle\") pod \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") "
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.432665 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-ssh-key-openstack-edpm-ipam\") pod \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") "
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.432768 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-inventory\") pod \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") "
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.432887 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-libvirt-secret-0\") pod \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\" (UID: \"5a074a3e-62ea-4cb2-96f3-ccce51518ad3\") "
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.438143 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "5a074a3e-62ea-4cb2-96f3-ccce51518ad3" (UID: "5a074a3e-62ea-4cb2-96f3-ccce51518ad3"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.443615 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-kube-api-access-phcqp" (OuterVolumeSpecName: "kube-api-access-phcqp") pod "5a074a3e-62ea-4cb2-96f3-ccce51518ad3" (UID: "5a074a3e-62ea-4cb2-96f3-ccce51518ad3"). InnerVolumeSpecName "kube-api-access-phcqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.460130 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5a074a3e-62ea-4cb2-96f3-ccce51518ad3" (UID: "5a074a3e-62ea-4cb2-96f3-ccce51518ad3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.462708 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-inventory" (OuterVolumeSpecName: "inventory") pod "5a074a3e-62ea-4cb2-96f3-ccce51518ad3" (UID: "5a074a3e-62ea-4cb2-96f3-ccce51518ad3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.467105 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "5a074a3e-62ea-4cb2-96f3-ccce51518ad3" (UID: "5a074a3e-62ea-4cb2-96f3-ccce51518ad3"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.535536 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.535572 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-inventory\") on node \"crc\" DevicePath \"\""
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.535583 4644 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.535593 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phcqp\" (UniqueName: \"kubernetes.io/projected/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-kube-api-access-phcqp\") on node \"crc\" DevicePath \"\""
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.535602 4644 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a074a3e-62ea-4cb2-96f3-ccce51518ad3-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.936089 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph" event={"ID":"5a074a3e-62ea-4cb2-96f3-ccce51518ad3","Type":"ContainerDied","Data":"1667cbcd8382b166b0b4720dc96c311bfe17df7a278c1aaec802c161faae1662"}
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.936139 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1667cbcd8382b166b0b4720dc96c311bfe17df7a278c1aaec802c161faae1662"
Feb 04 09:23:02 crc kubenswrapper[4644]: I0204 09:23:02.936175 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b44ph"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.131892 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"]
Feb 04 09:23:03 crc kubenswrapper[4644]: E0204 09:23:03.132354 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a074a3e-62ea-4cb2-96f3-ccce51518ad3" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.132376 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a074a3e-62ea-4cb2-96f3-ccce51518ad3" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 04 09:23:03 crc kubenswrapper[4644]: E0204 09:23:03.132410 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="734879ec-9bca-4287-877f-13baf866e62e" containerName="registry-server"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.132418 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="734879ec-9bca-4287-877f-13baf866e62e" containerName="registry-server"
Feb 04 09:23:03 crc kubenswrapper[4644]: E0204 09:23:03.132441 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="734879ec-9bca-4287-877f-13baf866e62e" containerName="extract-content"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.132451 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="734879ec-9bca-4287-877f-13baf866e62e" containerName="extract-content"
Feb 04 09:23:03 crc kubenswrapper[4644]: E0204 09:23:03.132468 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="734879ec-9bca-4287-877f-13baf866e62e" containerName="extract-utilities"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.132477 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="734879ec-9bca-4287-877f-13baf866e62e" containerName="extract-utilities"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.132696 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="734879ec-9bca-4287-877f-13baf866e62e" containerName="registry-server"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.132723 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a074a3e-62ea-4cb2-96f3-ccce51518ad3" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.133462 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.137957 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.138258 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.138422 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.138653 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.138849 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.139037 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.139186 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.157043 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"]
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.247928 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.248087 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.248144 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.248190 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.248260 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.248350 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.248474 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jqph\" (UniqueName: \"kubernetes.io/projected/7446c79e-b931-43ae-85a0-f21ab513e5e7-kube-api-access-6jqph\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.248545 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.248683 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.350637 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.350744 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.350808 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.350837 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.350874 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.350906 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.350962 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.351016 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jqph\" (UniqueName: \"kubernetes.io/projected/7446c79e-b931-43ae-85a0-f21ab513e5e7-kube-api-access-6jqph\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.351045 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.352478 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.355735 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.356454 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.356988 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.358701 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.360042 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.364064 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.365750 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.375144 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jqph\" (UniqueName: \"kubernetes.io/projected/7446c79e-b931-43ae-85a0-f21ab513e5e7-kube-api-access-6jqph\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wbclq\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:23:03 crc kubenswrapper[4644]: I0204 09:23:03.456934 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq" Feb 04 09:23:04 crc kubenswrapper[4644]: I0204 09:23:04.390535 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"] Feb 04 09:23:04 crc kubenswrapper[4644]: I0204 09:23:04.982204 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq" event={"ID":"7446c79e-b931-43ae-85a0-f21ab513e5e7","Type":"ContainerStarted","Data":"c33a62f525ce1645c37e178afa7f20d10adba9ec6ad1c855d237af98a48243eb"} Feb 04 09:23:05 crc kubenswrapper[4644]: I0204 09:23:05.994997 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq" event={"ID":"7446c79e-b931-43ae-85a0-f21ab513e5e7","Type":"ContainerStarted","Data":"8c615cdefca0908596e61177efd06cd0e3496fe12db8a2d9b7e9e043384b7ad1"} Feb 04 09:23:07 crc kubenswrapper[4644]: I0204 09:23:07.660366 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:23:07 crc kubenswrapper[4644]: E0204 09:23:07.660915 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:23:19 crc kubenswrapper[4644]: I0204 09:23:19.660751 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:23:19 crc kubenswrapper[4644]: E0204 09:23:19.661952 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:23:34 crc kubenswrapper[4644]: I0204 09:23:34.659766 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:23:34 crc kubenswrapper[4644]: E0204 09:23:34.660474 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:23:47 crc kubenswrapper[4644]: I0204 09:23:47.660132 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:23:47 crc kubenswrapper[4644]: E0204 09:23:47.660932 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:24:00 crc kubenswrapper[4644]: I0204 09:24:00.660682 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:24:00 crc kubenswrapper[4644]: E0204 09:24:00.661746 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:24:12 crc kubenswrapper[4644]: I0204 09:24:12.661634 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:24:12 crc kubenswrapper[4644]: E0204 09:24:12.662362 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:24:23 crc kubenswrapper[4644]: I0204 09:24:23.659945 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:24:23 crc kubenswrapper[4644]: E0204 09:24:23.660681 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:24:35 crc kubenswrapper[4644]: I0204 09:24:35.659858 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:24:35 crc kubenswrapper[4644]: E0204 09:24:35.660775 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:24:48 crc kubenswrapper[4644]: I0204 09:24:48.660596 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:24:49 crc kubenswrapper[4644]: I0204 09:24:49.848652 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"c5b0c705738baa9eb3ae9d94927c0dc222976cbc7019ec21bca54f41a8f23814"} Feb 04 09:24:49 crc kubenswrapper[4644]: I0204 09:24:49.876135 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq" podStartSLOduration=106.462784177 
podStartE2EDuration="1m46.876110269s" podCreationTimestamp="2026-02-04 09:23:03 +0000 UTC" firstStartedPulling="2026-02-04 09:23:04.393909222 +0000 UTC m=+2494.433966987" lastFinishedPulling="2026-02-04 09:23:04.807235324 +0000 UTC m=+2494.847293079" observedRunningTime="2026-02-04 09:23:06.022877566 +0000 UTC m=+2496.062935331" watchObservedRunningTime="2026-02-04 09:24:49.876110269 +0000 UTC m=+2599.916168034" Feb 04 09:24:50 crc kubenswrapper[4644]: I0204 09:24:50.321762 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v74lk"] Feb 04 09:24:50 crc kubenswrapper[4644]: I0204 09:24:50.332228 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:24:50 crc kubenswrapper[4644]: I0204 09:24:50.368921 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v74lk"] Feb 04 09:24:50 crc kubenswrapper[4644]: I0204 09:24:50.429315 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl7z5\" (UniqueName: \"kubernetes.io/projected/eda764ec-43eb-4952-8174-e81035c29bf9-kube-api-access-wl7z5\") pod \"redhat-operators-v74lk\" (UID: \"eda764ec-43eb-4952-8174-e81035c29bf9\") " pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:24:50 crc kubenswrapper[4644]: I0204 09:24:50.429460 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda764ec-43eb-4952-8174-e81035c29bf9-catalog-content\") pod \"redhat-operators-v74lk\" (UID: \"eda764ec-43eb-4952-8174-e81035c29bf9\") " pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:24:50 crc kubenswrapper[4644]: I0204 09:24:50.429670 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda764ec-43eb-4952-8174-e81035c29bf9-utilities\") pod \"redhat-operators-v74lk\" (UID: \"eda764ec-43eb-4952-8174-e81035c29bf9\") " pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:24:50 crc kubenswrapper[4644]: I0204 09:24:50.531627 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda764ec-43eb-4952-8174-e81035c29bf9-utilities\") pod \"redhat-operators-v74lk\" (UID: \"eda764ec-43eb-4952-8174-e81035c29bf9\") " pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:24:50 crc kubenswrapper[4644]: I0204 09:24:50.531718 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl7z5\" (UniqueName: \"kubernetes.io/projected/eda764ec-43eb-4952-8174-e81035c29bf9-kube-api-access-wl7z5\") pod \"redhat-operators-v74lk\" (UID: \"eda764ec-43eb-4952-8174-e81035c29bf9\") " pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:24:50 crc kubenswrapper[4644]: I0204 09:24:50.531782 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda764ec-43eb-4952-8174-e81035c29bf9-catalog-content\") pod \"redhat-operators-v74lk\" (UID: \"eda764ec-43eb-4952-8174-e81035c29bf9\") " pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:24:50 crc kubenswrapper[4644]: I0204 09:24:50.532276 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/eda764ec-43eb-4952-8174-e81035c29bf9-catalog-content\") pod \"redhat-operators-v74lk\" (UID: \"eda764ec-43eb-4952-8174-e81035c29bf9\") " pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:24:50 crc kubenswrapper[4644]: I0204 09:24:50.532392 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda764ec-43eb-4952-8174-e81035c29bf9-utilities\") pod \"redhat-operators-v74lk\" (UID: \"eda764ec-43eb-4952-8174-e81035c29bf9\") " pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:24:50 crc kubenswrapper[4644]: I0204 09:24:50.551934 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl7z5\" (UniqueName: \"kubernetes.io/projected/eda764ec-43eb-4952-8174-e81035c29bf9-kube-api-access-wl7z5\") pod \"redhat-operators-v74lk\" (UID: \"eda764ec-43eb-4952-8174-e81035c29bf9\") " pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:24:50 crc kubenswrapper[4644]: I0204 09:24:50.657571 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:24:51 crc kubenswrapper[4644]: I0204 09:24:51.155067 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v74lk"] Feb 04 09:24:51 crc kubenswrapper[4644]: I0204 09:24:51.868603 4644 generic.go:334] "Generic (PLEG): container finished" podID="eda764ec-43eb-4952-8174-e81035c29bf9" containerID="57583b84a0a5a395723858d6df793246ac4e14f2659135c66e78aa6c50db4cbb" exitCode=0 Feb 04 09:24:51 crc kubenswrapper[4644]: I0204 09:24:51.868697 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v74lk" event={"ID":"eda764ec-43eb-4952-8174-e81035c29bf9","Type":"ContainerDied","Data":"57583b84a0a5a395723858d6df793246ac4e14f2659135c66e78aa6c50db4cbb"} Feb 04 09:24:51 crc kubenswrapper[4644]: I0204 09:24:51.869056 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v74lk" event={"ID":"eda764ec-43eb-4952-8174-e81035c29bf9","Type":"ContainerStarted","Data":"e3ec9c863c64ad78c9958bf46ed4f0c6c4d42dde2e2d105d4107fc2015b995e7"} Feb 04 09:24:51 crc kubenswrapper[4644]: I0204 09:24:51.870966 4644 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 09:24:53 crc kubenswrapper[4644]: I0204 09:24:53.887934 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v74lk" event={"ID":"eda764ec-43eb-4952-8174-e81035c29bf9","Type":"ContainerStarted","Data":"9f7d71ee6d38c103ba63d01402fd5d55d0160b2cb124602916c0a328f0be64c9"} Feb 04 09:24:59 crc kubenswrapper[4644]: I0204 09:24:59.946095 4644 generic.go:334] "Generic (PLEG): container finished" podID="eda764ec-43eb-4952-8174-e81035c29bf9" containerID="9f7d71ee6d38c103ba63d01402fd5d55d0160b2cb124602916c0a328f0be64c9" exitCode=0 Feb 04 09:24:59 crc kubenswrapper[4644]: I0204 09:24:59.946170 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v74lk" event={"ID":"eda764ec-43eb-4952-8174-e81035c29bf9","Type":"ContainerDied","Data":"9f7d71ee6d38c103ba63d01402fd5d55d0160b2cb124602916c0a328f0be64c9"} Feb 04 09:25:00 crc kubenswrapper[4644]: I0204 09:25:00.958250 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v74lk" 
event={"ID":"eda764ec-43eb-4952-8174-e81035c29bf9","Type":"ContainerStarted","Data":"60c0b346adaa1276463be7586685bd422ed0b4438d5dcb22c5d67de2e7494eec"} Feb 04 09:25:00 crc kubenswrapper[4644]: I0204 09:25:00.978469 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v74lk" podStartSLOduration=2.519750647 podStartE2EDuration="10.978447878s" podCreationTimestamp="2026-02-04 09:24:50 +0000 UTC" firstStartedPulling="2026-02-04 09:24:51.870735459 +0000 UTC m=+2601.910793214" lastFinishedPulling="2026-02-04 09:25:00.32943268 +0000 UTC m=+2610.369490445" observedRunningTime="2026-02-04 09:25:00.972707993 +0000 UTC m=+2611.012765748" watchObservedRunningTime="2026-02-04 09:25:00.978447878 +0000 UTC m=+2611.018505633" Feb 04 09:25:10 crc kubenswrapper[4644]: I0204 09:25:10.658527 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:25:10 crc kubenswrapper[4644]: I0204 09:25:10.665248 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:25:10 crc kubenswrapper[4644]: I0204 09:25:10.709768 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:25:11 crc kubenswrapper[4644]: I0204 09:25:11.142699 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:25:11 crc kubenswrapper[4644]: I0204 09:25:11.187590 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v74lk"] Feb 04 09:25:13 crc kubenswrapper[4644]: I0204 09:25:13.078945 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v74lk" podUID="eda764ec-43eb-4952-8174-e81035c29bf9" containerName="registry-server" containerID="cri-o://60c0b346adaa1276463be7586685bd422ed0b4438d5dcb22c5d67de2e7494eec" gracePeriod=2 Feb 04 09:25:13 crc kubenswrapper[4644]: I0204 09:25:13.569838 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:25:13 crc kubenswrapper[4644]: I0204 09:25:13.724229 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl7z5\" (UniqueName: \"kubernetes.io/projected/eda764ec-43eb-4952-8174-e81035c29bf9-kube-api-access-wl7z5\") pod \"eda764ec-43eb-4952-8174-e81035c29bf9\" (UID: \"eda764ec-43eb-4952-8174-e81035c29bf9\") " Feb 04 09:25:13 crc kubenswrapper[4644]: I0204 09:25:13.724307 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda764ec-43eb-4952-8174-e81035c29bf9-catalog-content\") pod \"eda764ec-43eb-4952-8174-e81035c29bf9\" (UID: \"eda764ec-43eb-4952-8174-e81035c29bf9\") " Feb 04 09:25:13 crc kubenswrapper[4644]: I0204 09:25:13.726620 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda764ec-43eb-4952-8174-e81035c29bf9-utilities\") pod \"eda764ec-43eb-4952-8174-e81035c29bf9\" (UID: \"eda764ec-43eb-4952-8174-e81035c29bf9\") " Feb 04 09:25:13 crc kubenswrapper[4644]: I0204 09:25:13.727561 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda764ec-43eb-4952-8174-e81035c29bf9-utilities" (OuterVolumeSpecName: "utilities") pod "eda764ec-43eb-4952-8174-e81035c29bf9" (UID: "eda764ec-43eb-4952-8174-e81035c29bf9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:25:13 crc kubenswrapper[4644]: I0204 09:25:13.730267 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda764ec-43eb-4952-8174-e81035c29bf9-kube-api-access-wl7z5" (OuterVolumeSpecName: "kube-api-access-wl7z5") pod "eda764ec-43eb-4952-8174-e81035c29bf9" (UID: "eda764ec-43eb-4952-8174-e81035c29bf9"). InnerVolumeSpecName "kube-api-access-wl7z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:25:13 crc kubenswrapper[4644]: I0204 09:25:13.829880 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda764ec-43eb-4952-8174-e81035c29bf9-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 09:25:13 crc kubenswrapper[4644]: I0204 09:25:13.829942 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl7z5\" (UniqueName: \"kubernetes.io/projected/eda764ec-43eb-4952-8174-e81035c29bf9-kube-api-access-wl7z5\") on node \"crc\" DevicePath \"\"" Feb 04 09:25:13 crc kubenswrapper[4644]: I0204 09:25:13.863515 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda764ec-43eb-4952-8174-e81035c29bf9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eda764ec-43eb-4952-8174-e81035c29bf9" (UID: "eda764ec-43eb-4952-8174-e81035c29bf9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:25:13 crc kubenswrapper[4644]: I0204 09:25:13.931610 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda764ec-43eb-4952-8174-e81035c29bf9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.092163 4644 generic.go:334] "Generic (PLEG): container finished" podID="eda764ec-43eb-4952-8174-e81035c29bf9" containerID="60c0b346adaa1276463be7586685bd422ed0b4438d5dcb22c5d67de2e7494eec" exitCode=0 Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.092227 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v74lk" event={"ID":"eda764ec-43eb-4952-8174-e81035c29bf9","Type":"ContainerDied","Data":"60c0b346adaa1276463be7586685bd422ed0b4438d5dcb22c5d67de2e7494eec"} Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.092256 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v74lk" Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.092272 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v74lk" event={"ID":"eda764ec-43eb-4952-8174-e81035c29bf9","Type":"ContainerDied","Data":"e3ec9c863c64ad78c9958bf46ed4f0c6c4d42dde2e2d105d4107fc2015b995e7"} Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.092291 4644 scope.go:117] "RemoveContainer" containerID="60c0b346adaa1276463be7586685bd422ed0b4438d5dcb22c5d67de2e7494eec" Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.126922 4644 scope.go:117] "RemoveContainer" containerID="9f7d71ee6d38c103ba63d01402fd5d55d0160b2cb124602916c0a328f0be64c9" Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.145961 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v74lk"] Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.153980 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v74lk"] Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.172443 4644 scope.go:117] "RemoveContainer" containerID="57583b84a0a5a395723858d6df793246ac4e14f2659135c66e78aa6c50db4cbb" Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.211167 4644 scope.go:117] "RemoveContainer" containerID="60c0b346adaa1276463be7586685bd422ed0b4438d5dcb22c5d67de2e7494eec" Feb 04 09:25:14 crc kubenswrapper[4644]: E0204 09:25:14.211728 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60c0b346adaa1276463be7586685bd422ed0b4438d5dcb22c5d67de2e7494eec\": container with ID starting with 60c0b346adaa1276463be7586685bd422ed0b4438d5dcb22c5d67de2e7494eec not found: ID does not exist" containerID="60c0b346adaa1276463be7586685bd422ed0b4438d5dcb22c5d67de2e7494eec" Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.211770 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c0b346adaa1276463be7586685bd422ed0b4438d5dcb22c5d67de2e7494eec"} err="failed to get container status \"60c0b346adaa1276463be7586685bd422ed0b4438d5dcb22c5d67de2e7494eec\": rpc error: code = NotFound desc = could not find container \"60c0b346adaa1276463be7586685bd422ed0b4438d5dcb22c5d67de2e7494eec\": container with ID starting with 60c0b346adaa1276463be7586685bd422ed0b4438d5dcb22c5d67de2e7494eec not found: ID does not exist" Feb 04 09:25:14 crc 
Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.211796 4644 scope.go:117] "RemoveContainer" containerID="9f7d71ee6d38c103ba63d01402fd5d55d0160b2cb124602916c0a328f0be64c9"
Feb 04 09:25:14 crc kubenswrapper[4644]: E0204 09:25:14.212011 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f7d71ee6d38c103ba63d01402fd5d55d0160b2cb124602916c0a328f0be64c9\": container with ID starting with 9f7d71ee6d38c103ba63d01402fd5d55d0160b2cb124602916c0a328f0be64c9 not found: ID does not exist" containerID="9f7d71ee6d38c103ba63d01402fd5d55d0160b2cb124602916c0a328f0be64c9"
Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.212044 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f7d71ee6d38c103ba63d01402fd5d55d0160b2cb124602916c0a328f0be64c9"} err="failed to get container status \"9f7d71ee6d38c103ba63d01402fd5d55d0160b2cb124602916c0a328f0be64c9\": rpc error: code = NotFound desc = could not find container \"9f7d71ee6d38c103ba63d01402fd5d55d0160b2cb124602916c0a328f0be64c9\": container with ID starting with 9f7d71ee6d38c103ba63d01402fd5d55d0160b2cb124602916c0a328f0be64c9 not found: ID does not exist"
Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.212062 4644 scope.go:117] "RemoveContainer" containerID="57583b84a0a5a395723858d6df793246ac4e14f2659135c66e78aa6c50db4cbb"
Feb 04 09:25:14 crc kubenswrapper[4644]: E0204 09:25:14.212320 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57583b84a0a5a395723858d6df793246ac4e14f2659135c66e78aa6c50db4cbb\": container with ID starting with 57583b84a0a5a395723858d6df793246ac4e14f2659135c66e78aa6c50db4cbb not found: ID does not exist" containerID="57583b84a0a5a395723858d6df793246ac4e14f2659135c66e78aa6c50db4cbb"
Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.212394 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57583b84a0a5a395723858d6df793246ac4e14f2659135c66e78aa6c50db4cbb"} err="failed to get container status \"57583b84a0a5a395723858d6df793246ac4e14f2659135c66e78aa6c50db4cbb\": rpc error: code = NotFound desc = could not find container \"57583b84a0a5a395723858d6df793246ac4e14f2659135c66e78aa6c50db4cbb\": container with ID starting with 57583b84a0a5a395723858d6df793246ac4e14f2659135c66e78aa6c50db4cbb not found: ID does not exist"
Feb 04 09:25:14 crc kubenswrapper[4644]: I0204 09:25:14.677395 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda764ec-43eb-4952-8174-e81035c29bf9" path="/var/lib/kubelet/pods/eda764ec-43eb-4952-8174-e81035c29bf9/volumes"
Feb 04 09:25:30 crc kubenswrapper[4644]: I0204 09:25:30.243126 4644 generic.go:334] "Generic (PLEG): container finished" podID="7446c79e-b931-43ae-85a0-f21ab513e5e7" containerID="8c615cdefca0908596e61177efd06cd0e3496fe12db8a2d9b7e9e043384b7ad1" exitCode=0
Feb 04 09:25:30 crc kubenswrapper[4644]: I0204 09:25:30.243257 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq" event={"ID":"7446c79e-b931-43ae-85a0-f21ab513e5e7","Type":"ContainerDied","Data":"8c615cdefca0908596e61177efd06cd0e3496fe12db8a2d9b7e9e043384b7ad1"}
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.723128 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.786840 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jqph\" (UniqueName: \"kubernetes.io/projected/7446c79e-b931-43ae-85a0-f21ab513e5e7-kube-api-access-6jqph\") pod \"7446c79e-b931-43ae-85a0-f21ab513e5e7\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") "
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.786906 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-ssh-key-openstack-edpm-ipam\") pod \"7446c79e-b931-43ae-85a0-f21ab513e5e7\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") "
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.786954 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-inventory\") pod \"7446c79e-b931-43ae-85a0-f21ab513e5e7\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") "
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.786987 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-migration-ssh-key-1\") pod \"7446c79e-b931-43ae-85a0-f21ab513e5e7\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") "
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.794008 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7446c79e-b931-43ae-85a0-f21ab513e5e7-kube-api-access-6jqph" (OuterVolumeSpecName: "kube-api-access-6jqph") pod "7446c79e-b931-43ae-85a0-f21ab513e5e7" (UID: "7446c79e-b931-43ae-85a0-f21ab513e5e7"). InnerVolumeSpecName "kube-api-access-6jqph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.821967 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7446c79e-b931-43ae-85a0-f21ab513e5e7" (UID: "7446c79e-b931-43ae-85a0-f21ab513e5e7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.830842 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-inventory" (OuterVolumeSpecName: "inventory") pod "7446c79e-b931-43ae-85a0-f21ab513e5e7" (UID: "7446c79e-b931-43ae-85a0-f21ab513e5e7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.831300 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "7446c79e-b931-43ae-85a0-f21ab513e5e7" (UID: "7446c79e-b931-43ae-85a0-f21ab513e5e7"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.889246 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-combined-ca-bundle\") pod \"7446c79e-b931-43ae-85a0-f21ab513e5e7\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") "
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.889291 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-cell1-compute-config-1\") pod \"7446c79e-b931-43ae-85a0-f21ab513e5e7\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") "
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.889671 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-migration-ssh-key-0\") pod \"7446c79e-b931-43ae-85a0-f21ab513e5e7\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") "
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.889725 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-extra-config-0\") pod \"7446c79e-b931-43ae-85a0-f21ab513e5e7\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") "
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.889891 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-cell1-compute-config-0\") pod \"7446c79e-b931-43ae-85a0-f21ab513e5e7\" (UID: \"7446c79e-b931-43ae-85a0-f21ab513e5e7\") "
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.890370 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.890395 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-inventory\") on node \"crc\" DevicePath \"\""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.890410 4644 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.890422 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jqph\" (UniqueName: \"kubernetes.io/projected/7446c79e-b931-43ae-85a0-f21ab513e5e7-kube-api-access-6jqph\") on node \"crc\" DevicePath \"\""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.894491 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "7446c79e-b931-43ae-85a0-f21ab513e5e7" (UID: "7446c79e-b931-43ae-85a0-f21ab513e5e7"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.919300 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "7446c79e-b931-43ae-85a0-f21ab513e5e7" (UID: "7446c79e-b931-43ae-85a0-f21ab513e5e7"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.923965 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "7446c79e-b931-43ae-85a0-f21ab513e5e7" (UID: "7446c79e-b931-43ae-85a0-f21ab513e5e7"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.924007 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "7446c79e-b931-43ae-85a0-f21ab513e5e7" (UID: "7446c79e-b931-43ae-85a0-f21ab513e5e7"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.933986 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "7446c79e-b931-43ae-85a0-f21ab513e5e7" (UID: "7446c79e-b931-43ae-85a0-f21ab513e5e7"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.991356 4644 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.991591 4644 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.991632 4644 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.991646 4644 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Feb 04 09:25:31 crc kubenswrapper[4644]: I0204 09:25:31.991661 4644 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7446c79e-b931-43ae-85a0-f21ab513e5e7-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.260233 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq" event={"ID":"7446c79e-b931-43ae-85a0-f21ab513e5e7","Type":"ContainerDied","Data":"c33a62f525ce1645c37e178afa7f20d10adba9ec6ad1c855d237af98a48243eb"}
Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.260268 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c33a62f525ce1645c37e178afa7f20d10adba9ec6ad1c855d237af98a48243eb"
Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.260572 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wbclq"
Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.378599 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k"]
Feb 04 09:25:32 crc kubenswrapper[4644]: E0204 09:25:32.379038 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda764ec-43eb-4952-8174-e81035c29bf9" containerName="extract-utilities"
Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.379061 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda764ec-43eb-4952-8174-e81035c29bf9" containerName="extract-utilities"
Feb 04 09:25:32 crc kubenswrapper[4644]: E0204 09:25:32.379082 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda764ec-43eb-4952-8174-e81035c29bf9" containerName="registry-server"
Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.379092 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda764ec-43eb-4952-8174-e81035c29bf9" containerName="registry-server"
Feb 04 09:25:32 crc kubenswrapper[4644]: E0204 09:25:32.379106 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda764ec-43eb-4952-8174-e81035c29bf9" containerName="extract-content"
Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.379115 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda764ec-43eb-4952-8174-e81035c29bf9" containerName="extract-content"
Feb 04 09:25:32 crc kubenswrapper[4644]: E0204 09:25:32.379130 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7446c79e-b931-43ae-85a0-f21ab513e5e7" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.379138 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="7446c79e-b931-43ae-85a0-f21ab513e5e7" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.379383 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda764ec-43eb-4952-8174-e81035c29bf9" containerName="registry-server"
Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.379413 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="7446c79e-b931-43ae-85a0-f21ab513e5e7" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.380173 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k"
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.382272 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.382494 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.382596 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hwgzk" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.383222 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.383448 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.401751 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k"] Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.502091 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.502423 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.502563 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.502787 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.502816 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 
09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.502922 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnctb\" (UniqueName: \"kubernetes.io/projected/feb1a5d9-f2df-4534-8a80-73d11c854b35-kube-api-access-rnctb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.503012 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.604621 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnctb\" (UniqueName: \"kubernetes.io/projected/feb1a5d9-f2df-4534-8a80-73d11c854b35-kube-api-access-rnctb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.604729 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.604792 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.604831 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.604875 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.604959 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" 
(UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.604984 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.610575 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.610775 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.612135 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.612871 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.613820 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.614070 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.623104 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnctb\" (UniqueName: \"kubernetes.io/projected/feb1a5d9-f2df-4534-8a80-73d11c854b35-kube-api-access-rnctb\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-gv78k\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:32 crc kubenswrapper[4644]: I0204 09:25:32.702458 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:25:33 crc kubenswrapper[4644]: I0204 09:25:33.236662 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k"] Feb 04 09:25:33 crc kubenswrapper[4644]: I0204 09:25:33.269244 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" event={"ID":"feb1a5d9-f2df-4534-8a80-73d11c854b35","Type":"ContainerStarted","Data":"91d96129b0dd8d34950fe3bd2ab9a806a252f88d9420b4bbe1912909a098e43d"} Feb 04 09:25:35 crc kubenswrapper[4644]: I0204 09:25:35.289154 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" event={"ID":"feb1a5d9-f2df-4534-8a80-73d11c854b35","Type":"ContainerStarted","Data":"55a65bbece94eda37846633e0a1c1e89ec135c11a1d22dabd04bc1225c9e533f"} Feb 04 09:25:35 crc kubenswrapper[4644]: I0204 09:25:35.317785 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" podStartSLOduration=2.423198325 podStartE2EDuration="3.317764385s" podCreationTimestamp="2026-02-04 09:25:32 +0000 UTC" firstStartedPulling="2026-02-04 09:25:33.252024103 +0000 UTC m=+2643.292081858" lastFinishedPulling="2026-02-04 09:25:34.146590133 +0000 UTC m=+2644.186647918" observedRunningTime="2026-02-04 09:25:35.309901802 +0000 UTC m=+2645.349959568" watchObservedRunningTime="2026-02-04 09:25:35.317764385 +0000 UTC m=+2645.357822150" Feb 04 09:26:04 crc kubenswrapper[4644]: I0204 09:26:04.611458 4644 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l6nqv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 04 09:26:04 crc kubenswrapper[4644]: I0204 09:26:04.611515 4644 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l6nqv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 04 09:26:04 crc kubenswrapper[4644]: I0204 09:26:04.612301 4644 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" podUID="1380462d-7e7c-4c20-859c-4132b703369e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 04 09:26:04 crc kubenswrapper[4644]: I0204 09:26:04.612361 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l6nqv" podUID="1380462d-7e7c-4c20-859c-4132b703369e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 04 09:26:04 crc kubenswrapper[4644]: I0204 09:26:04.637496 4644 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-xh4t4 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 04 09:26:04 crc kubenswrapper[4644]: I0204 09:26:04.637557 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-xh4t4" podUID="d9babe17-48df-46b7-9d27-a6698abfa7e7" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 04 09:27:05 crc kubenswrapper[4644]: I0204 09:27:05.555235 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:27:05 crc kubenswrapper[4644]: I0204 09:27:05.555791 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:27:35 crc kubenswrapper[4644]: I0204 09:27:35.555529 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:27:35 crc kubenswrapper[4644]: I0204 09:27:35.556033 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:27:44 crc kubenswrapper[4644]: I0204 09:27:44.102968 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-59m77"] Feb 04 09:27:44 crc kubenswrapper[4644]: I0204 09:27:44.106840 4644 util.go:30] "No sandbox for pod can be found. 
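The probe outputs above fail in the two classic ways: a connect that is refused outright (nothing listening on 127.0.0.1:8798 while machine-config-daemon is down) and a request that times out waiting for response headers (Client.Timeout exceeded). A minimal probe in the same spirit; the URL reuses the one from the log, and the one-second timeout is illustrative, not the kubelet's configured value:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs a single HTTP health check with a hard client timeout,
// producing the same two failure modes seen in the log: a refused
// connection or a "Client.Timeout exceeded" error.
func probe(url string) string {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return "failure: " + err.Error()
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return "success"
	}
	return fmt.Sprintf("failure: status %d", resp.StatusCode)
}

func main() {
	fmt.Println(probe("http://127.0.0.1:8798/health"))
}
```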
Need to start a new one" pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:44 crc kubenswrapper[4644]: I0204 09:27:44.172635 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-59m77"] Feb 04 09:27:44 crc kubenswrapper[4644]: I0204 09:27:44.181379 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkqch\" (UniqueName: \"kubernetes.io/projected/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-kube-api-access-lkqch\") pod \"certified-operators-59m77\" (UID: \"dd020b40-daef-4b1c-af94-3c5c43ee8fc9\") " pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:44 crc kubenswrapper[4644]: I0204 09:27:44.181456 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-utilities\") pod \"certified-operators-59m77\" (UID: \"dd020b40-daef-4b1c-af94-3c5c43ee8fc9\") " pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:44 crc kubenswrapper[4644]: I0204 09:27:44.181495 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-catalog-content\") pod \"certified-operators-59m77\" (UID: \"dd020b40-daef-4b1c-af94-3c5c43ee8fc9\") " pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:44 crc kubenswrapper[4644]: I0204 09:27:44.282883 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkqch\" (UniqueName: \"kubernetes.io/projected/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-kube-api-access-lkqch\") pod \"certified-operators-59m77\" (UID: \"dd020b40-daef-4b1c-af94-3c5c43ee8fc9\") " pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:44 crc kubenswrapper[4644]: I0204 09:27:44.283048 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-utilities\") pod \"certified-operators-59m77\" (UID: \"dd020b40-daef-4b1c-af94-3c5c43ee8fc9\") " pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:44 crc kubenswrapper[4644]: I0204 09:27:44.283107 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-catalog-content\") pod \"certified-operators-59m77\" (UID: \"dd020b40-daef-4b1c-af94-3c5c43ee8fc9\") " pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:44 crc kubenswrapper[4644]: I0204 09:27:44.283755 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-catalog-content\") pod \"certified-operators-59m77\" (UID: \"dd020b40-daef-4b1c-af94-3c5c43ee8fc9\") " pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:44 crc kubenswrapper[4644]: I0204 09:27:44.283883 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-utilities\") pod \"certified-operators-59m77\" (UID: \"dd020b40-daef-4b1c-af94-3c5c43ee8fc9\") " pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:44 crc kubenswrapper[4644]: I0204 09:27:44.305675 4644 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lkqch\" (UniqueName: \"kubernetes.io/projected/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-kube-api-access-lkqch\") pod \"certified-operators-59m77\" (UID: \"dd020b40-daef-4b1c-af94-3c5c43ee8fc9\") " pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:44 crc kubenswrapper[4644]: I0204 09:27:44.436507 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:44 crc kubenswrapper[4644]: I0204 09:27:44.776958 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-59m77"] Feb 04 09:27:45 crc kubenswrapper[4644]: I0204 09:27:45.480585 4644 generic.go:334] "Generic (PLEG): container finished" podID="dd020b40-daef-4b1c-af94-3c5c43ee8fc9" containerID="96676f00fa2edf713249e0efc5d9ad7da8528f5285dfcba1108f9d80c3964b62" exitCode=0 Feb 04 09:27:45 crc kubenswrapper[4644]: I0204 09:27:45.481599 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59m77" event={"ID":"dd020b40-daef-4b1c-af94-3c5c43ee8fc9","Type":"ContainerDied","Data":"96676f00fa2edf713249e0efc5d9ad7da8528f5285dfcba1108f9d80c3964b62"} Feb 04 09:27:45 crc kubenswrapper[4644]: I0204 09:27:45.481712 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59m77" event={"ID":"dd020b40-daef-4b1c-af94-3c5c43ee8fc9","Type":"ContainerStarted","Data":"67555de3fe0d8ef905fc46a7d012177fdb3ec0270dc087b1081c2380733f9a99"} Feb 04 09:27:47 crc kubenswrapper[4644]: I0204 09:27:47.506124 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59m77" event={"ID":"dd020b40-daef-4b1c-af94-3c5c43ee8fc9","Type":"ContainerStarted","Data":"b4a01a7261da737cb001cfe3784dbd29573d00f85736a47a4ce0206278664d5d"} Feb 04 09:27:48 crc kubenswrapper[4644]: I0204 09:27:48.518738 4644 generic.go:334] "Generic (PLEG): container finished" podID="dd020b40-daef-4b1c-af94-3c5c43ee8fc9" containerID="b4a01a7261da737cb001cfe3784dbd29573d00f85736a47a4ce0206278664d5d" exitCode=0 Feb 04 09:27:48 crc kubenswrapper[4644]: I0204 09:27:48.518869 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59m77" event={"ID":"dd020b40-daef-4b1c-af94-3c5c43ee8fc9","Type":"ContainerDied","Data":"b4a01a7261da737cb001cfe3784dbd29573d00f85736a47a4ce0206278664d5d"} Feb 04 09:27:49 crc kubenswrapper[4644]: I0204 09:27:49.530110 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59m77" event={"ID":"dd020b40-daef-4b1c-af94-3c5c43ee8fc9","Type":"ContainerStarted","Data":"fad636f4441ce0356fc5f3a6ade703fd77a526c86ad71cea4cedeede610b6876"} Feb 04 09:27:49 crc kubenswrapper[4644]: I0204 09:27:49.551389 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-59m77" podStartSLOduration=2.048330135 podStartE2EDuration="5.551367006s" podCreationTimestamp="2026-02-04 09:27:44 +0000 UTC" firstStartedPulling="2026-02-04 09:27:45.483133614 +0000 UTC m=+2775.523191369" lastFinishedPulling="2026-02-04 09:27:48.986170485 +0000 UTC m=+2779.026228240" observedRunningTime="2026-02-04 09:27:49.546875426 +0000 UTC m=+2779.586933201" watchObservedRunningTime="2026-02-04 09:27:49.551367006 +0000 UTC m=+2779.591424761" Feb 04 09:27:52 crc kubenswrapper[4644]: I0204 09:27:52.489224 4644 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-ffp5r"] Feb 04 09:27:52 crc kubenswrapper[4644]: I0204 09:27:52.491479 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:27:52 crc kubenswrapper[4644]: I0204 09:27:52.501139 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffp5r"] Feb 04 09:27:52 crc kubenswrapper[4644]: I0204 09:27:52.661073 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056fe5d5-c973-4f22-b71f-c9821ec0f815-utilities\") pod \"redhat-marketplace-ffp5r\" (UID: \"056fe5d5-c973-4f22-b71f-c9821ec0f815\") " pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:27:52 crc kubenswrapper[4644]: I0204 09:27:52.661115 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056fe5d5-c973-4f22-b71f-c9821ec0f815-catalog-content\") pod \"redhat-marketplace-ffp5r\" (UID: \"056fe5d5-c973-4f22-b71f-c9821ec0f815\") " pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:27:52 crc kubenswrapper[4644]: I0204 09:27:52.661154 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvmwn\" (UniqueName: \"kubernetes.io/projected/056fe5d5-c973-4f22-b71f-c9821ec0f815-kube-api-access-pvmwn\") pod \"redhat-marketplace-ffp5r\" (UID: \"056fe5d5-c973-4f22-b71f-c9821ec0f815\") " pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:27:52 crc kubenswrapper[4644]: I0204 09:27:52.763470 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056fe5d5-c973-4f22-b71f-c9821ec0f815-utilities\") pod \"redhat-marketplace-ffp5r\" (UID: \"056fe5d5-c973-4f22-b71f-c9821ec0f815\") " pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:27:52 crc kubenswrapper[4644]: I0204 09:27:52.763540 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056fe5d5-c973-4f22-b71f-c9821ec0f815-catalog-content\") pod \"redhat-marketplace-ffp5r\" (UID: \"056fe5d5-c973-4f22-b71f-c9821ec0f815\") " pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:27:52 crc kubenswrapper[4644]: I0204 09:27:52.763615 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvmwn\" (UniqueName: \"kubernetes.io/projected/056fe5d5-c973-4f22-b71f-c9821ec0f815-kube-api-access-pvmwn\") pod \"redhat-marketplace-ffp5r\" (UID: \"056fe5d5-c973-4f22-b71f-c9821ec0f815\") " pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:27:52 crc kubenswrapper[4644]: I0204 09:27:52.764833 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056fe5d5-c973-4f22-b71f-c9821ec0f815-utilities\") pod \"redhat-marketplace-ffp5r\" (UID: \"056fe5d5-c973-4f22-b71f-c9821ec0f815\") " pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:27:52 crc kubenswrapper[4644]: I0204 09:27:52.765733 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056fe5d5-c973-4f22-b71f-c9821ec0f815-catalog-content\") pod \"redhat-marketplace-ffp5r\" (UID: \"056fe5d5-c973-4f22-b71f-c9821ec0f815\") 
" pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:27:52 crc kubenswrapper[4644]: I0204 09:27:52.784357 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvmwn\" (UniqueName: \"kubernetes.io/projected/056fe5d5-c973-4f22-b71f-c9821ec0f815-kube-api-access-pvmwn\") pod \"redhat-marketplace-ffp5r\" (UID: \"056fe5d5-c973-4f22-b71f-c9821ec0f815\") " pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:27:52 crc kubenswrapper[4644]: I0204 09:27:52.823838 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:27:53 crc kubenswrapper[4644]: I0204 09:27:53.393140 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffp5r"] Feb 04 09:27:53 crc kubenswrapper[4644]: I0204 09:27:53.566114 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffp5r" event={"ID":"056fe5d5-c973-4f22-b71f-c9821ec0f815","Type":"ContainerStarted","Data":"9f33fb0635104c2db6c55e36e1bfa069005353692988ed7d6e4d133e0a02f454"} Feb 04 09:27:54 crc kubenswrapper[4644]: I0204 09:27:54.437289 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:54 crc kubenswrapper[4644]: I0204 09:27:54.437349 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:54 crc kubenswrapper[4644]: I0204 09:27:54.485610 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:54 crc kubenswrapper[4644]: I0204 09:27:54.575001 4644 generic.go:334] "Generic (PLEG): container finished" podID="056fe5d5-c973-4f22-b71f-c9821ec0f815" containerID="450dfe1ccc672747e387399b69092e82eeebbc60ba9b62d63a619423b109e09c" exitCode=0 Feb 04 09:27:54 crc kubenswrapper[4644]: I0204 09:27:54.575046 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffp5r" event={"ID":"056fe5d5-c973-4f22-b71f-c9821ec0f815","Type":"ContainerDied","Data":"450dfe1ccc672747e387399b69092e82eeebbc60ba9b62d63a619423b109e09c"} Feb 04 09:27:54 crc kubenswrapper[4644]: I0204 09:27:54.628307 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:55 crc kubenswrapper[4644]: I0204 09:27:55.586388 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffp5r" event={"ID":"056fe5d5-c973-4f22-b71f-c9821ec0f815","Type":"ContainerStarted","Data":"57c5cf8e3e54e6556d0975374bff08df9f9f0b008661c52c194849b2d140f4bf"} Feb 04 09:27:56 crc kubenswrapper[4644]: I0204 09:27:56.858224 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-59m77"] Feb 04 09:27:56 crc kubenswrapper[4644]: I0204 09:27:56.859348 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-59m77" podUID="dd020b40-daef-4b1c-af94-3c5c43ee8fc9" containerName="registry-server" containerID="cri-o://fad636f4441ce0356fc5f3a6ade703fd77a526c86ad71cea4cedeede610b6876" gracePeriod=2 Feb 04 09:27:57 crc kubenswrapper[4644]: E0204 09:27:57.425947 4644 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod056fe5d5_c973_4f22_b71f_c9821ec0f815.slice/crio-conmon-57c5cf8e3e54e6556d0975374bff08df9f9f0b008661c52c194849b2d140f4bf.scope\": RecentStats: unable to find data in memory cache]" Feb 04 09:27:57 crc kubenswrapper[4644]: I0204 09:27:57.610493 4644 generic.go:334] "Generic (PLEG): container finished" podID="056fe5d5-c973-4f22-b71f-c9821ec0f815" containerID="57c5cf8e3e54e6556d0975374bff08df9f9f0b008661c52c194849b2d140f4bf" exitCode=0 Feb 04 09:27:57 crc kubenswrapper[4644]: I0204 09:27:57.610564 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffp5r" event={"ID":"056fe5d5-c973-4f22-b71f-c9821ec0f815","Type":"ContainerDied","Data":"57c5cf8e3e54e6556d0975374bff08df9f9f0b008661c52c194849b2d140f4bf"} Feb 04 09:27:57 crc kubenswrapper[4644]: I0204 09:27:57.615093 4644 generic.go:334] "Generic (PLEG): container finished" podID="dd020b40-daef-4b1c-af94-3c5c43ee8fc9" containerID="fad636f4441ce0356fc5f3a6ade703fd77a526c86ad71cea4cedeede610b6876" exitCode=0 Feb 04 09:27:57 crc kubenswrapper[4644]: I0204 09:27:57.615155 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59m77" event={"ID":"dd020b40-daef-4b1c-af94-3c5c43ee8fc9","Type":"ContainerDied","Data":"fad636f4441ce0356fc5f3a6ade703fd77a526c86ad71cea4cedeede610b6876"} Feb 04 09:27:57 crc kubenswrapper[4644]: I0204 09:27:57.847962 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:57 crc kubenswrapper[4644]: I0204 09:27:57.965524 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-catalog-content\") pod \"dd020b40-daef-4b1c-af94-3c5c43ee8fc9\" (UID: \"dd020b40-daef-4b1c-af94-3c5c43ee8fc9\") " Feb 04 09:27:57 crc kubenswrapper[4644]: I0204 09:27:57.965581 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkqch\" (UniqueName: \"kubernetes.io/projected/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-kube-api-access-lkqch\") pod \"dd020b40-daef-4b1c-af94-3c5c43ee8fc9\" (UID: \"dd020b40-daef-4b1c-af94-3c5c43ee8fc9\") " Feb 04 09:27:57 crc kubenswrapper[4644]: I0204 09:27:57.965765 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-utilities\") pod \"dd020b40-daef-4b1c-af94-3c5c43ee8fc9\" (UID: \"dd020b40-daef-4b1c-af94-3c5c43ee8fc9\") " Feb 04 09:27:57 crc kubenswrapper[4644]: I0204 09:27:57.966853 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-utilities" (OuterVolumeSpecName: "utilities") pod "dd020b40-daef-4b1c-af94-3c5c43ee8fc9" (UID: "dd020b40-daef-4b1c-af94-3c5c43ee8fc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:27:57 crc kubenswrapper[4644]: I0204 09:27:57.971604 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-kube-api-access-lkqch" (OuterVolumeSpecName: "kube-api-access-lkqch") pod "dd020b40-daef-4b1c-af94-3c5c43ee8fc9" (UID: "dd020b40-daef-4b1c-af94-3c5c43ee8fc9"). InnerVolumeSpecName "kube-api-access-lkqch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:27:58 crc kubenswrapper[4644]: I0204 09:27:58.021779 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd020b40-daef-4b1c-af94-3c5c43ee8fc9" (UID: "dd020b40-daef-4b1c-af94-3c5c43ee8fc9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:27:58 crc kubenswrapper[4644]: I0204 09:27:58.067478 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 09:27:58 crc kubenswrapper[4644]: I0204 09:27:58.067508 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 09:27:58 crc kubenswrapper[4644]: I0204 09:27:58.067522 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkqch\" (UniqueName: \"kubernetes.io/projected/dd020b40-daef-4b1c-af94-3c5c43ee8fc9-kube-api-access-lkqch\") on node \"crc\" DevicePath \"\"" Feb 04 09:27:58 crc kubenswrapper[4644]: I0204 09:27:58.625816 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffp5r" event={"ID":"056fe5d5-c973-4f22-b71f-c9821ec0f815","Type":"ContainerStarted","Data":"93def1a61959f629d01f9375dded81f604cb8946e89fd2e417ec4060b88b604c"} Feb 04 09:27:58 crc kubenswrapper[4644]: I0204 09:27:58.632313 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59m77" event={"ID":"dd020b40-daef-4b1c-af94-3c5c43ee8fc9","Type":"ContainerDied","Data":"67555de3fe0d8ef905fc46a7d012177fdb3ec0270dc087b1081c2380733f9a99"} Feb 04 09:27:58 crc kubenswrapper[4644]: I0204 09:27:58.632386 4644 scope.go:117] "RemoveContainer" containerID="fad636f4441ce0356fc5f3a6ade703fd77a526c86ad71cea4cedeede610b6876" Feb 04 09:27:58 crc kubenswrapper[4644]: I0204 09:27:58.632560 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-59m77" Feb 04 09:27:58 crc kubenswrapper[4644]: I0204 09:27:58.655698 4644 scope.go:117] "RemoveContainer" containerID="b4a01a7261da737cb001cfe3784dbd29573d00f85736a47a4ce0206278664d5d" Feb 04 09:27:58 crc kubenswrapper[4644]: I0204 09:27:58.680541 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ffp5r" podStartSLOduration=3.227869954 podStartE2EDuration="6.68051869s" podCreationTimestamp="2026-02-04 09:27:52 +0000 UTC" firstStartedPulling="2026-02-04 09:27:54.576843933 +0000 UTC m=+2784.616901678" lastFinishedPulling="2026-02-04 09:27:58.029492659 +0000 UTC m=+2788.069550414" observedRunningTime="2026-02-04 09:27:58.645424446 +0000 UTC m=+2788.685482231" watchObservedRunningTime="2026-02-04 09:27:58.68051869 +0000 UTC m=+2788.720576445" Feb 04 09:27:58 crc kubenswrapper[4644]: I0204 09:27:58.687811 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-59m77"] Feb 04 09:27:58 crc kubenswrapper[4644]: I0204 09:27:58.698556 4644 scope.go:117] "RemoveContainer" containerID="96676f00fa2edf713249e0efc5d9ad7da8528f5285dfcba1108f9d80c3964b62" Feb 04 09:27:58 crc kubenswrapper[4644]: I0204 09:27:58.699713 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-59m77"] Feb 04 09:28:00 crc kubenswrapper[4644]: I0204 09:28:00.670979 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd020b40-daef-4b1c-af94-3c5c43ee8fc9" path="/var/lib/kubelet/pods/dd020b40-daef-4b1c-af94-3c5c43ee8fc9/volumes" Feb 04 09:28:02 crc kubenswrapper[4644]: I0204 09:28:02.824812 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:28:02 crc kubenswrapper[4644]: I0204 09:28:02.824945 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:28:02 crc kubenswrapper[4644]: I0204 09:28:02.874244 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:28:03 crc kubenswrapper[4644]: I0204 09:28:03.745125 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:28:04 crc kubenswrapper[4644]: I0204 09:28:04.711587 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffp5r"] Feb 04 09:28:05 crc kubenswrapper[4644]: I0204 09:28:05.554837 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:28:05 crc kubenswrapper[4644]: I0204 09:28:05.555273 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:28:05 crc kubenswrapper[4644]: I0204 09:28:05.555357 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 
04 09:28:05 crc kubenswrapper[4644]: I0204 09:28:05.556562 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5b0c705738baa9eb3ae9d94927c0dc222976cbc7019ec21bca54f41a8f23814"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 09:28:05 crc kubenswrapper[4644]: I0204 09:28:05.556721 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://c5b0c705738baa9eb3ae9d94927c0dc222976cbc7019ec21bca54f41a8f23814" gracePeriod=600 Feb 04 09:28:05 crc kubenswrapper[4644]: I0204 09:28:05.718298 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="c5b0c705738baa9eb3ae9d94927c0dc222976cbc7019ec21bca54f41a8f23814" exitCode=0 Feb 04 09:28:05 crc kubenswrapper[4644]: I0204 09:28:05.718381 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"c5b0c705738baa9eb3ae9d94927c0dc222976cbc7019ec21bca54f41a8f23814"} Feb 04 09:28:05 crc kubenswrapper[4644]: I0204 09:28:05.719443 4644 scope.go:117] "RemoveContainer" containerID="796aff37e1f8ec9c65686f86fbda7e166c2a0072ead49fe93f6e2c844e2843bb" Feb 04 09:28:06 crc kubenswrapper[4644]: I0204 09:28:06.735891 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ffp5r" podUID="056fe5d5-c973-4f22-b71f-c9821ec0f815" containerName="registry-server" containerID="cri-o://93def1a61959f629d01f9375dded81f604cb8946e89fd2e417ec4060b88b604c" gracePeriod=2 Feb 04 09:28:06 crc kubenswrapper[4644]: I0204 09:28:06.736559 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"} Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.171447 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.272785 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvmwn\" (UniqueName: \"kubernetes.io/projected/056fe5d5-c973-4f22-b71f-c9821ec0f815-kube-api-access-pvmwn\") pod \"056fe5d5-c973-4f22-b71f-c9821ec0f815\" (UID: \"056fe5d5-c973-4f22-b71f-c9821ec0f815\") " Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.272962 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056fe5d5-c973-4f22-b71f-c9821ec0f815-utilities\") pod \"056fe5d5-c973-4f22-b71f-c9821ec0f815\" (UID: \"056fe5d5-c973-4f22-b71f-c9821ec0f815\") " Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.272991 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056fe5d5-c973-4f22-b71f-c9821ec0f815-catalog-content\") pod \"056fe5d5-c973-4f22-b71f-c9821ec0f815\" (UID: \"056fe5d5-c973-4f22-b71f-c9821ec0f815\") " Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.273956 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/056fe5d5-c973-4f22-b71f-c9821ec0f815-utilities" (OuterVolumeSpecName: "utilities") pod "056fe5d5-c973-4f22-b71f-c9821ec0f815" (UID: "056fe5d5-c973-4f22-b71f-c9821ec0f815"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.278657 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056fe5d5-c973-4f22-b71f-c9821ec0f815-kube-api-access-pvmwn" (OuterVolumeSpecName: "kube-api-access-pvmwn") pod "056fe5d5-c973-4f22-b71f-c9821ec0f815" (UID: "056fe5d5-c973-4f22-b71f-c9821ec0f815"). InnerVolumeSpecName "kube-api-access-pvmwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.306499 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/056fe5d5-c973-4f22-b71f-c9821ec0f815-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "056fe5d5-c973-4f22-b71f-c9821ec0f815" (UID: "056fe5d5-c973-4f22-b71f-c9821ec0f815"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.375776 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvmwn\" (UniqueName: \"kubernetes.io/projected/056fe5d5-c973-4f22-b71f-c9821ec0f815-kube-api-access-pvmwn\") on node \"crc\" DevicePath \"\"" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.375815 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056fe5d5-c973-4f22-b71f-c9821ec0f815-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.375828 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056fe5d5-c973-4f22-b71f-c9821ec0f815-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.745556 4644 generic.go:334] "Generic (PLEG): container finished" podID="056fe5d5-c973-4f22-b71f-c9821ec0f815" containerID="93def1a61959f629d01f9375dded81f604cb8946e89fd2e417ec4060b88b604c" exitCode=0 Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.746762 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffp5r" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.749520 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffp5r" event={"ID":"056fe5d5-c973-4f22-b71f-c9821ec0f815","Type":"ContainerDied","Data":"93def1a61959f629d01f9375dded81f604cb8946e89fd2e417ec4060b88b604c"} Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.749609 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffp5r" event={"ID":"056fe5d5-c973-4f22-b71f-c9821ec0f815","Type":"ContainerDied","Data":"9f33fb0635104c2db6c55e36e1bfa069005353692988ed7d6e4d133e0a02f454"} Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.749640 4644 scope.go:117] "RemoveContainer" containerID="93def1a61959f629d01f9375dded81f604cb8946e89fd2e417ec4060b88b604c" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.769598 4644 scope.go:117] "RemoveContainer" containerID="57c5cf8e3e54e6556d0975374bff08df9f9f0b008661c52c194849b2d140f4bf" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.792494 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffp5r"] Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.798578 4644 scope.go:117] "RemoveContainer" containerID="450dfe1ccc672747e387399b69092e82eeebbc60ba9b62d63a619423b109e09c" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.802521 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffp5r"] Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.861002 4644 scope.go:117] "RemoveContainer" containerID="93def1a61959f629d01f9375dded81f604cb8946e89fd2e417ec4060b88b604c" Feb 04 09:28:07 crc kubenswrapper[4644]: E0204 09:28:07.861561 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93def1a61959f629d01f9375dded81f604cb8946e89fd2e417ec4060b88b604c\": container with ID starting with 93def1a61959f629d01f9375dded81f604cb8946e89fd2e417ec4060b88b604c not found: ID does not exist" containerID="93def1a61959f629d01f9375dded81f604cb8946e89fd2e417ec4060b88b604c" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.861603 4644 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93def1a61959f629d01f9375dded81f604cb8946e89fd2e417ec4060b88b604c"} err="failed to get container status \"93def1a61959f629d01f9375dded81f604cb8946e89fd2e417ec4060b88b604c\": rpc error: code = NotFound desc = could not find container \"93def1a61959f629d01f9375dded81f604cb8946e89fd2e417ec4060b88b604c\": container with ID starting with 93def1a61959f629d01f9375dded81f604cb8946e89fd2e417ec4060b88b604c not found: ID does not exist" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.861631 4644 scope.go:117] "RemoveContainer" containerID="57c5cf8e3e54e6556d0975374bff08df9f9f0b008661c52c194849b2d140f4bf" Feb 04 09:28:07 crc kubenswrapper[4644]: E0204 09:28:07.861983 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57c5cf8e3e54e6556d0975374bff08df9f9f0b008661c52c194849b2d140f4bf\": container with ID starting with 57c5cf8e3e54e6556d0975374bff08df9f9f0b008661c52c194849b2d140f4bf not found: ID does not exist" containerID="57c5cf8e3e54e6556d0975374bff08df9f9f0b008661c52c194849b2d140f4bf" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.862060 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57c5cf8e3e54e6556d0975374bff08df9f9f0b008661c52c194849b2d140f4bf"} err="failed to get container status \"57c5cf8e3e54e6556d0975374bff08df9f9f0b008661c52c194849b2d140f4bf\": rpc error: code = NotFound desc = could not find container \"57c5cf8e3e54e6556d0975374bff08df9f9f0b008661c52c194849b2d140f4bf\": container with ID starting with 57c5cf8e3e54e6556d0975374bff08df9f9f0b008661c52c194849b2d140f4bf not found: ID does not exist" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.862137 4644 scope.go:117] "RemoveContainer" containerID="450dfe1ccc672747e387399b69092e82eeebbc60ba9b62d63a619423b109e09c" Feb 04 09:28:07 crc kubenswrapper[4644]: E0204 09:28:07.862530 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"450dfe1ccc672747e387399b69092e82eeebbc60ba9b62d63a619423b109e09c\": container with ID starting with 450dfe1ccc672747e387399b69092e82eeebbc60ba9b62d63a619423b109e09c not found: ID does not exist" containerID="450dfe1ccc672747e387399b69092e82eeebbc60ba9b62d63a619423b109e09c" Feb 04 09:28:07 crc kubenswrapper[4644]: I0204 09:28:07.862622 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450dfe1ccc672747e387399b69092e82eeebbc60ba9b62d63a619423b109e09c"} err="failed to get container status \"450dfe1ccc672747e387399b69092e82eeebbc60ba9b62d63a619423b109e09c\": rpc error: code = NotFound desc = could not find container \"450dfe1ccc672747e387399b69092e82eeebbc60ba9b62d63a619423b109e09c\": container with ID starting with 450dfe1ccc672747e387399b69092e82eeebbc60ba9b62d63a619423b109e09c not found: ID does not exist" Feb 04 09:28:08 crc kubenswrapper[4644]: I0204 09:28:08.673618 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056fe5d5-c973-4f22-b71f-c9821ec0f815" path="/var/lib/kubelet/pods/056fe5d5-c973-4f22-b71f-c9821ec0f815/volumes" Feb 04 09:28:32 crc kubenswrapper[4644]: I0204 09:28:32.958995 4644 generic.go:334] "Generic (PLEG): container finished" podID="feb1a5d9-f2df-4534-8a80-73d11c854b35" containerID="55a65bbece94eda37846633e0a1c1e89ec135c11a1d22dabd04bc1225c9e533f" exitCode=0 Feb 04 09:28:32 crc kubenswrapper[4644]: I0204 
09:28:32.959125 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" event={"ID":"feb1a5d9-f2df-4534-8a80-73d11c854b35","Type":"ContainerDied","Data":"55a65bbece94eda37846633e0a1c1e89ec135c11a1d22dabd04bc1225c9e533f"} Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.427297 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.538944 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-0\") pod \"feb1a5d9-f2df-4534-8a80-73d11c854b35\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.539114 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-inventory\") pod \"feb1a5d9-f2df-4534-8a80-73d11c854b35\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.539210 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-2\") pod \"feb1a5d9-f2df-4534-8a80-73d11c854b35\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.539264 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ssh-key-openstack-edpm-ipam\") pod \"feb1a5d9-f2df-4534-8a80-73d11c854b35\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.539311 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-1\") pod \"feb1a5d9-f2df-4534-8a80-73d11c854b35\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.539477 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnctb\" (UniqueName: \"kubernetes.io/projected/feb1a5d9-f2df-4534-8a80-73d11c854b35-kube-api-access-rnctb\") pod \"feb1a5d9-f2df-4534-8a80-73d11c854b35\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.539509 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-telemetry-combined-ca-bundle\") pod \"feb1a5d9-f2df-4534-8a80-73d11c854b35\" (UID: \"feb1a5d9-f2df-4534-8a80-73d11c854b35\") " Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.545797 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb1a5d9-f2df-4534-8a80-73d11c854b35-kube-api-access-rnctb" (OuterVolumeSpecName: "kube-api-access-rnctb") pod "feb1a5d9-f2df-4534-8a80-73d11c854b35" (UID: "feb1a5d9-f2df-4534-8a80-73d11c854b35"). InnerVolumeSpecName "kube-api-access-rnctb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.550483 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "feb1a5d9-f2df-4534-8a80-73d11c854b35" (UID: "feb1a5d9-f2df-4534-8a80-73d11c854b35"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.571404 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "feb1a5d9-f2df-4534-8a80-73d11c854b35" (UID: "feb1a5d9-f2df-4534-8a80-73d11c854b35"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.605056 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "feb1a5d9-f2df-4534-8a80-73d11c854b35" (UID: "feb1a5d9-f2df-4534-8a80-73d11c854b35"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.610284 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "feb1a5d9-f2df-4534-8a80-73d11c854b35" (UID: "feb1a5d9-f2df-4534-8a80-73d11c854b35"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.628406 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-inventory" (OuterVolumeSpecName: "inventory") pod "feb1a5d9-f2df-4534-8a80-73d11c854b35" (UID: "feb1a5d9-f2df-4534-8a80-73d11c854b35"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.635975 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "feb1a5d9-f2df-4534-8a80-73d11c854b35" (UID: "feb1a5d9-f2df-4534-8a80-73d11c854b35"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.642626 4644 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.642663 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.642687 4644 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.642700 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnctb\" (UniqueName: \"kubernetes.io/projected/feb1a5d9-f2df-4534-8a80-73d11c854b35-kube-api-access-rnctb\") on node \"crc\" DevicePath \"\"" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.642716 4644 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.642728 4644 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.642741 4644 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/feb1a5d9-f2df-4534-8a80-73d11c854b35-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.978830 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" event={"ID":"feb1a5d9-f2df-4534-8a80-73d11c854b35","Type":"ContainerDied","Data":"91d96129b0dd8d34950fe3bd2ab9a806a252f88d9420b4bbe1912909a098e43d"} Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.979443 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d96129b0dd8d34950fe3bd2ab9a806a252f88d9420b4bbe1912909a098e43d" Feb 04 09:28:34 crc kubenswrapper[4644]: I0204 09:28:34.978936 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gv78k" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.894076 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 04 09:29:31 crc kubenswrapper[4644]: E0204 09:29:31.895143 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056fe5d5-c973-4f22-b71f-c9821ec0f815" containerName="extract-utilities" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.895160 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="056fe5d5-c973-4f22-b71f-c9821ec0f815" containerName="extract-utilities" Feb 04 09:29:31 crc kubenswrapper[4644]: E0204 09:29:31.895181 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd020b40-daef-4b1c-af94-3c5c43ee8fc9" containerName="registry-server" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.895189 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd020b40-daef-4b1c-af94-3c5c43ee8fc9" containerName="registry-server" Feb 04 09:29:31 crc kubenswrapper[4644]: E0204 09:29:31.895209 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd020b40-daef-4b1c-af94-3c5c43ee8fc9" containerName="extract-content" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.895217 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd020b40-daef-4b1c-af94-3c5c43ee8fc9" containerName="extract-content" Feb 04 09:29:31 crc kubenswrapper[4644]: E0204 09:29:31.895236 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd020b40-daef-4b1c-af94-3c5c43ee8fc9" containerName="extract-utilities" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.895243 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd020b40-daef-4b1c-af94-3c5c43ee8fc9" containerName="extract-utilities" Feb 04 09:29:31 crc kubenswrapper[4644]: E0204 09:29:31.895256 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056fe5d5-c973-4f22-b71f-c9821ec0f815" containerName="extract-content" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.895264 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="056fe5d5-c973-4f22-b71f-c9821ec0f815" containerName="extract-content" Feb 04 09:29:31 crc kubenswrapper[4644]: E0204 09:29:31.895277 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056fe5d5-c973-4f22-b71f-c9821ec0f815" containerName="registry-server" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.895284 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="056fe5d5-c973-4f22-b71f-c9821ec0f815" containerName="registry-server" Feb 04 09:29:31 crc kubenswrapper[4644]: E0204 09:29:31.895294 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb1a5d9-f2df-4534-8a80-73d11c854b35" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.895304 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb1a5d9-f2df-4534-8a80-73d11c854b35" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.895531 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="056fe5d5-c973-4f22-b71f-c9821ec0f815" containerName="registry-server" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.895558 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb1a5d9-f2df-4534-8a80-73d11c854b35" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 04 09:29:31 crc 
kubenswrapper[4644]: I0204 09:29:31.895571 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd020b40-daef-4b1c-af94-3c5c43ee8fc9" containerName="registry-server" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.896281 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.898211 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.898229 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.899859 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-spf2h" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.903520 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 04 09:29:31 crc kubenswrapper[4644]: I0204 09:29:31.916314 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.092716 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.092791 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.092839 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqdgv\" (UniqueName: \"kubernetes.io/projected/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-kube-api-access-zqdgv\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.092858 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-config-data\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.092885 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.092906 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " 
pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.092926 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.092957 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.093010 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.195041 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.195117 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.195191 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.195252 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqdgv\" (UniqueName: \"kubernetes.io/projected/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-kube-api-access-zqdgv\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.195279 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-config-data\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.195319 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc 
kubenswrapper[4644]: I0204 09:29:32.195371 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.195403 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.195447 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.195615 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.196244 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.196557 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.196573 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.196914 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-config-data\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.203792 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.203909 4644 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.204564 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.220667 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqdgv\" (UniqueName: \"kubernetes.io/projected/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-kube-api-access-zqdgv\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.224212 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.511804 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 04 09:29:32 crc kubenswrapper[4644]: I0204 09:29:32.984818 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 04 09:29:33 crc kubenswrapper[4644]: I0204 09:29:33.495732 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a","Type":"ContainerStarted","Data":"2640e4985a32ec4e9e548e86058ca96c0b5a6186a4e2b56a0d39b586e1804313"} Feb 04 09:30:00 crc kubenswrapper[4644]: I0204 09:30:00.172602 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz"] Feb 04 09:30:00 crc kubenswrapper[4644]: I0204 09:30:00.174270 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" Feb 04 09:30:00 crc kubenswrapper[4644]: I0204 09:30:00.180072 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 04 09:30:00 crc kubenswrapper[4644]: I0204 09:30:00.180515 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 04 09:30:00 crc kubenswrapper[4644]: I0204 09:30:00.199169 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac8d3768-43b5-4986-a31f-fe96035f28e7-secret-volume\") pod \"collect-profiles-29503290-k5mzz\" (UID: \"ac8d3768-43b5-4986-a31f-fe96035f28e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" Feb 04 09:30:00 crc kubenswrapper[4644]: I0204 09:30:00.199235 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6wbj\" (UniqueName: \"kubernetes.io/projected/ac8d3768-43b5-4986-a31f-fe96035f28e7-kube-api-access-t6wbj\") pod \"collect-profiles-29503290-k5mzz\" (UID: \"ac8d3768-43b5-4986-a31f-fe96035f28e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" Feb 04 09:30:00 crc kubenswrapper[4644]: I0204 09:30:00.199469 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac8d3768-43b5-4986-a31f-fe96035f28e7-config-volume\") pod \"collect-profiles-29503290-k5mzz\" (UID: \"ac8d3768-43b5-4986-a31f-fe96035f28e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" Feb 04 09:30:00 crc kubenswrapper[4644]: I0204 09:30:00.207013 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz"] Feb 04 09:30:00 crc kubenswrapper[4644]: I0204 09:30:00.301061 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac8d3768-43b5-4986-a31f-fe96035f28e7-config-volume\") pod \"collect-profiles-29503290-k5mzz\" (UID: \"ac8d3768-43b5-4986-a31f-fe96035f28e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" Feb 04 09:30:00 crc kubenswrapper[4644]: I0204 09:30:00.301232 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac8d3768-43b5-4986-a31f-fe96035f28e7-secret-volume\") pod \"collect-profiles-29503290-k5mzz\" (UID: \"ac8d3768-43b5-4986-a31f-fe96035f28e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" Feb 04 09:30:00 crc kubenswrapper[4644]: I0204 09:30:00.301267 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6wbj\" (UniqueName: \"kubernetes.io/projected/ac8d3768-43b5-4986-a31f-fe96035f28e7-kube-api-access-t6wbj\") pod \"collect-profiles-29503290-k5mzz\" (UID: \"ac8d3768-43b5-4986-a31f-fe96035f28e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" Feb 04 09:30:00 crc kubenswrapper[4644]: I0204 09:30:00.302749 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac8d3768-43b5-4986-a31f-fe96035f28e7-config-volume\") pod 
\"collect-profiles-29503290-k5mzz\" (UID: \"ac8d3768-43b5-4986-a31f-fe96035f28e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" Feb 04 09:30:00 crc kubenswrapper[4644]: I0204 09:30:00.361712 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6wbj\" (UniqueName: \"kubernetes.io/projected/ac8d3768-43b5-4986-a31f-fe96035f28e7-kube-api-access-t6wbj\") pod \"collect-profiles-29503290-k5mzz\" (UID: \"ac8d3768-43b5-4986-a31f-fe96035f28e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" Feb 04 09:30:00 crc kubenswrapper[4644]: I0204 09:30:00.362329 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac8d3768-43b5-4986-a31f-fe96035f28e7-secret-volume\") pod \"collect-profiles-29503290-k5mzz\" (UID: \"ac8d3768-43b5-4986-a31f-fe96035f28e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" Feb 04 09:30:00 crc kubenswrapper[4644]: I0204 09:30:00.508637 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" Feb 04 09:30:03 crc kubenswrapper[4644]: E0204 09:30:03.615894 4644 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 04 09:30:03 crc kubenswrapper[4644]: E0204 09:30:03.618907 4644 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqdgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 09:30:03 crc kubenswrapper[4644]: E0204 09:30:03.620263 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a" Feb 04 09:30:03 crc kubenswrapper[4644]: E0204 09:30:03.808894 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a" Feb 04 09:30:04 crc kubenswrapper[4644]: I0204 09:30:04.137682 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz"] Feb 04 09:30:04 crc kubenswrapper[4644]: I0204 09:30:04.816418 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" event={"ID":"ac8d3768-43b5-4986-a31f-fe96035f28e7","Type":"ContainerStarted","Data":"33ce0fcd02fbbfe3af90121cdbc40128f29dde4d7991458cf3a7dd4f52619124"} Feb 04 09:30:04 crc kubenswrapper[4644]: I0204 09:30:04.816764 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" event={"ID":"ac8d3768-43b5-4986-a31f-fe96035f28e7","Type":"ContainerStarted","Data":"37edf5da2dd05e297882464c5b7e51348251961035f397ed07c2c3289c004878"} Feb 04 09:30:04 crc kubenswrapper[4644]: I0204 09:30:04.840850 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" podStartSLOduration=4.840829899 podStartE2EDuration="4.840829899s" podCreationTimestamp="2026-02-04 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:30:04.837795967 +0000 UTC m=+2914.877853722" watchObservedRunningTime="2026-02-04 09:30:04.840829899 +0000 UTC m=+2914.880887654" Feb 04 09:30:05 crc kubenswrapper[4644]: 
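The ErrImagePull above is a context-cancellation error surfacing through gRPC: the pull of the tempest image was aborted mid-copy, so the runtime reported code = Canceled. Below is a minimal Go sketch of that propagation, with pullImage standing in hypothetically for the CRI ImageService call; nothing in it is taken from kubelet or CRI-O source.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// pullImage stands in for a CRI ImageService.PullImage call; the real call
// is a gRPC request whose context is canceled when the caller gives up.
func pullImage(ctx context.Context, image string) error {
	select {
	case <-time.After(5 * time.Second): // pretend copying the image takes 5s
		return nil
	case <-ctx.Done():
		// gRPC maps context.Canceled onto "rpc error: code = Canceled".
		return fmt.Errorf("copying config: %w", ctx.Err())
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() { time.Sleep(100 * time.Millisecond); cancel() }() // pod worker aborts
	err := pullImage(ctx, "quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified")
	fmt.Println(err, errors.Is(err, context.Canceled)) // copying config: context canceled true
}

The same wrapped cancellation is what the pod worker then turns into the ImagePullBackOff retry logged immediately afterwards; the pull eventually succeeds at 09:30:17.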
Feb 04 09:30:05 crc kubenswrapper[4644]: I0204 09:30:05.555606 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 09:30:05 crc kubenswrapper[4644]: I0204 09:30:05.555901 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 09:30:05 crc kubenswrapper[4644]: I0204 09:30:05.827116 4644 generic.go:334] "Generic (PLEG): container finished" podID="ac8d3768-43b5-4986-a31f-fe96035f28e7" containerID="33ce0fcd02fbbfe3af90121cdbc40128f29dde4d7991458cf3a7dd4f52619124" exitCode=0
Feb 04 09:30:05 crc kubenswrapper[4644]: I0204 09:30:05.827166 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" event={"ID":"ac8d3768-43b5-4986-a31f-fe96035f28e7","Type":"ContainerDied","Data":"33ce0fcd02fbbfe3af90121cdbc40128f29dde4d7991458cf3a7dd4f52619124"}
Feb 04 09:30:07 crc kubenswrapper[4644]: I0204 09:30:07.167134 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz"
Feb 04 09:30:07 crc kubenswrapper[4644]: I0204 09:30:07.340164 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6wbj\" (UniqueName: \"kubernetes.io/projected/ac8d3768-43b5-4986-a31f-fe96035f28e7-kube-api-access-t6wbj\") pod \"ac8d3768-43b5-4986-a31f-fe96035f28e7\" (UID: \"ac8d3768-43b5-4986-a31f-fe96035f28e7\") "
Feb 04 09:30:07 crc kubenswrapper[4644]: I0204 09:30:07.340661 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac8d3768-43b5-4986-a31f-fe96035f28e7-secret-volume\") pod \"ac8d3768-43b5-4986-a31f-fe96035f28e7\" (UID: \"ac8d3768-43b5-4986-a31f-fe96035f28e7\") "
Feb 04 09:30:07 crc kubenswrapper[4644]: I0204 09:30:07.340718 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac8d3768-43b5-4986-a31f-fe96035f28e7-config-volume\") pod \"ac8d3768-43b5-4986-a31f-fe96035f28e7\" (UID: \"ac8d3768-43b5-4986-a31f-fe96035f28e7\") "
Feb 04 09:30:07 crc kubenswrapper[4644]: I0204 09:30:07.341347 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8d3768-43b5-4986-a31f-fe96035f28e7-config-volume" (OuterVolumeSpecName: "config-volume") pod "ac8d3768-43b5-4986-a31f-fe96035f28e7" (UID: "ac8d3768-43b5-4986-a31f-fe96035f28e7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 09:30:07 crc kubenswrapper[4644]: I0204 09:30:07.346564 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac8d3768-43b5-4986-a31f-fe96035f28e7-kube-api-access-t6wbj" (OuterVolumeSpecName: "kube-api-access-t6wbj") pod "ac8d3768-43b5-4986-a31f-fe96035f28e7" (UID: "ac8d3768-43b5-4986-a31f-fe96035f28e7"). InnerVolumeSpecName "kube-api-access-t6wbj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:30:07 crc kubenswrapper[4644]: I0204 09:30:07.347480 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8d3768-43b5-4986-a31f-fe96035f28e7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ac8d3768-43b5-4986-a31f-fe96035f28e7" (UID: "ac8d3768-43b5-4986-a31f-fe96035f28e7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 09:30:07 crc kubenswrapper[4644]: I0204 09:30:07.443006 4644 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac8d3768-43b5-4986-a31f-fe96035f28e7-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 04 09:30:07 crc kubenswrapper[4644]: I0204 09:30:07.443056 4644 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac8d3768-43b5-4986-a31f-fe96035f28e7-config-volume\") on node \"crc\" DevicePath \"\""
Feb 04 09:30:07 crc kubenswrapper[4644]: I0204 09:30:07.443069 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6wbj\" (UniqueName: \"kubernetes.io/projected/ac8d3768-43b5-4986-a31f-fe96035f28e7-kube-api-access-t6wbj\") on node \"crc\" DevicePath \"\""
Feb 04 09:30:07 crc kubenswrapper[4644]: I0204 09:30:07.852936 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz" event={"ID":"ac8d3768-43b5-4986-a31f-fe96035f28e7","Type":"ContainerDied","Data":"37edf5da2dd05e297882464c5b7e51348251961035f397ed07c2c3289c004878"}
Feb 04 09:30:07 crc kubenswrapper[4644]: I0204 09:30:07.853007 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37edf5da2dd05e297882464c5b7e51348251961035f397ed07c2c3289c004878"
Feb 04 09:30:07 crc kubenswrapper[4644]: I0204 09:30:07.853026 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503290-k5mzz"
Feb 04 09:30:08 crc kubenswrapper[4644]: I0204 09:30:08.261928 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9"]
Feb 04 09:30:08 crc kubenswrapper[4644]: I0204 09:30:08.270822 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503245-l89h9"]
Feb 04 09:30:08 crc kubenswrapper[4644]: I0204 09:30:08.676972 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b231d551-35a7-406f-b661-914bad0ecec5" path="/var/lib/kubelet/pods/b231d551-35a7-406f-b661-914bad0ecec5/volumes"
Feb 04 09:30:16 crc kubenswrapper[4644]: I0204 09:30:16.662447 4644 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 04 09:30:18 crc kubenswrapper[4644]: I0204 09:30:18.949213 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a","Type":"ContainerStarted","Data":"101cf3bc772c5640b819a2c2fc997c9d8f98dd037fdd66f52f914b84315540fb"}
Feb 04 09:30:18 crc kubenswrapper[4644]: I0204 09:30:18.970894 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.322079218 podStartE2EDuration="48.970870136s" podCreationTimestamp="2026-02-04 09:29:30 +0000 UTC" firstStartedPulling="2026-02-04 09:29:32.990215749 +0000 UTC m=+2883.030273504" lastFinishedPulling="2026-02-04 09:30:17.639006537 +0000 UTC m=+2927.679064422" observedRunningTime="2026-02-04 09:30:18.965628884 +0000 UTC m=+2929.005686639" watchObservedRunningTime="2026-02-04 09:30:18.970870136 +0000 UTC m=+2929.010927901"
Feb 04 09:30:35 crc kubenswrapper[4644]: I0204 09:30:35.555365 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 09:30:35 crc kubenswrapper[4644]: I0204 09:30:35.555997 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 09:31:02 crc kubenswrapper[4644]: I0204 09:31:02.430822 4644 scope.go:117] "RemoveContainer" containerID="32a3e9e2294c1c0230ce715cd5aee2238ccf3367f57b06e4cba669b1652df0cd"
Feb 04 09:31:05 crc kubenswrapper[4644]: I0204 09:31:05.554670 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 09:31:05 crc kubenswrapper[4644]: I0204 09:31:05.555142 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
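The repeating liveness failures against http://127.0.0.1:8798/health are plain HTTP GETs that cannot even open the TCP connection while machine-config-daemon is down. A rough Go equivalent of what the kubelet prober does for an httpGet probe; the 1s timeout and the accepted status range are assumptions, not read from this pod's spec.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe mimics the kubelet's HTTP liveness check against the
// machine-config-daemon health endpoint seen in the log; the port and
// path come from the log line, everything else is illustrative.
func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Liveness probe status=failure output:", err)
	}
}

Connection refused counts as a probe failure just like a non-2xx status; with default thresholds, the consecutive failures at 09:30:05, 09:30:35, and 09:31:05 are consistent with the container kill that follows.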
Feb 04 09:31:05 crc kubenswrapper[4644]: I0204 09:31:05.555184 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck"
Feb 04 09:31:05 crc kubenswrapper[4644]: I0204 09:31:05.555906 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 04 09:31:05 crc kubenswrapper[4644]: I0204 09:31:05.555968 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b" gracePeriod=600
Feb 04 09:31:05 crc kubenswrapper[4644]: E0204 09:31:05.672822 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:31:06 crc kubenswrapper[4644]: I0204 09:31:06.370605 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b" exitCode=0
Feb 04 09:31:06 crc kubenswrapper[4644]: I0204 09:31:06.370678 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"}
Feb 04 09:31:06 crc kubenswrapper[4644]: I0204 09:31:06.370982 4644 scope.go:117] "RemoveContainer" containerID="c5b0c705738baa9eb3ae9d94927c0dc222976cbc7019ec21bca54f41a8f23814"
Feb 04 09:31:06 crc kubenswrapper[4644]: I0204 09:31:06.371617 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"
Feb 04 09:31:06 crc kubenswrapper[4644]: E0204 09:31:06.372004 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:31:19 crc kubenswrapper[4644]: I0204 09:31:19.660872 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"
Feb 04 09:31:19 crc kubenswrapper[4644]: E0204 09:31:19.661668 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:31:31 crc kubenswrapper[4644]: I0204 09:31:31.679180 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"
Feb 04 09:31:31 crc kubenswrapper[4644]: E0204 09:31:31.680521 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:31:46 crc kubenswrapper[4644]: I0204 09:31:46.659934 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"
Feb 04 09:31:46 crc kubenswrapper[4644]: E0204 09:31:46.660624 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:31:59 crc kubenswrapper[4644]: I0204 09:31:59.660518 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"
Feb 04 09:31:59 crc kubenswrapper[4644]: E0204 09:31:59.661147 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:32:13 crc kubenswrapper[4644]: I0204 09:32:13.660477 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"
Feb 04 09:32:13 crc kubenswrapper[4644]: E0204 09:32:13.661355 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:32:26 crc kubenswrapper[4644]: I0204 09:32:26.660168 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"
Feb 04 09:32:26 crc kubenswrapper[4644]: E0204 09:32:26.660953 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
pods=["openshift-marketplace/community-operators-bvl9x"] Feb 04 09:32:35 crc kubenswrapper[4644]: E0204 09:32:35.692163 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8d3768-43b5-4986-a31f-fe96035f28e7" containerName="collect-profiles" Feb 04 09:32:35 crc kubenswrapper[4644]: I0204 09:32:35.692263 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac8d3768-43b5-4986-a31f-fe96035f28e7" containerName="collect-profiles" Feb 04 09:32:35 crc kubenswrapper[4644]: I0204 09:32:35.692619 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac8d3768-43b5-4986-a31f-fe96035f28e7" containerName="collect-profiles" Feb 04 09:32:35 crc kubenswrapper[4644]: I0204 09:32:35.694095 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bvl9x" Feb 04 09:32:35 crc kubenswrapper[4644]: I0204 09:32:35.702411 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bvl9x"] Feb 04 09:32:35 crc kubenswrapper[4644]: I0204 09:32:35.807667 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/552450e0-a82e-4826-b912-c4327e38a86d-catalog-content\") pod \"community-operators-bvl9x\" (UID: \"552450e0-a82e-4826-b912-c4327e38a86d\") " pod="openshift-marketplace/community-operators-bvl9x" Feb 04 09:32:35 crc kubenswrapper[4644]: I0204 09:32:35.807736 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/552450e0-a82e-4826-b912-c4327e38a86d-utilities\") pod \"community-operators-bvl9x\" (UID: \"552450e0-a82e-4826-b912-c4327e38a86d\") " pod="openshift-marketplace/community-operators-bvl9x" Feb 04 09:32:35 crc kubenswrapper[4644]: I0204 09:32:35.807867 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7vcm\" (UniqueName: \"kubernetes.io/projected/552450e0-a82e-4826-b912-c4327e38a86d-kube-api-access-z7vcm\") pod \"community-operators-bvl9x\" (UID: \"552450e0-a82e-4826-b912-c4327e38a86d\") " pod="openshift-marketplace/community-operators-bvl9x" Feb 04 09:32:35 crc kubenswrapper[4644]: I0204 09:32:35.909092 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/552450e0-a82e-4826-b912-c4327e38a86d-catalog-content\") pod \"community-operators-bvl9x\" (UID: \"552450e0-a82e-4826-b912-c4327e38a86d\") " pod="openshift-marketplace/community-operators-bvl9x" Feb 04 09:32:35 crc kubenswrapper[4644]: I0204 09:32:35.909648 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/552450e0-a82e-4826-b912-c4327e38a86d-catalog-content\") pod \"community-operators-bvl9x\" (UID: \"552450e0-a82e-4826-b912-c4327e38a86d\") " pod="openshift-marketplace/community-operators-bvl9x" Feb 04 09:32:35 crc kubenswrapper[4644]: I0204 09:32:35.909779 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/552450e0-a82e-4826-b912-c4327e38a86d-utilities\") pod \"community-operators-bvl9x\" (UID: \"552450e0-a82e-4826-b912-c4327e38a86d\") " pod="openshift-marketplace/community-operators-bvl9x" Feb 04 09:32:35 crc kubenswrapper[4644]: I0204 09:32:35.909937 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z7vcm\" (UniqueName: \"kubernetes.io/projected/552450e0-a82e-4826-b912-c4327e38a86d-kube-api-access-z7vcm\") pod \"community-operators-bvl9x\" (UID: \"552450e0-a82e-4826-b912-c4327e38a86d\") " pod="openshift-marketplace/community-operators-bvl9x" Feb 04 09:32:35 crc kubenswrapper[4644]: I0204 09:32:35.910055 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/552450e0-a82e-4826-b912-c4327e38a86d-utilities\") pod \"community-operators-bvl9x\" (UID: \"552450e0-a82e-4826-b912-c4327e38a86d\") " pod="openshift-marketplace/community-operators-bvl9x" Feb 04 09:32:35 crc kubenswrapper[4644]: I0204 09:32:35.933747 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7vcm\" (UniqueName: \"kubernetes.io/projected/552450e0-a82e-4826-b912-c4327e38a86d-kube-api-access-z7vcm\") pod \"community-operators-bvl9x\" (UID: \"552450e0-a82e-4826-b912-c4327e38a86d\") " pod="openshift-marketplace/community-operators-bvl9x" Feb 04 09:32:36 crc kubenswrapper[4644]: I0204 09:32:36.030425 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bvl9x" Feb 04 09:32:36 crc kubenswrapper[4644]: I0204 09:32:36.800804 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bvl9x"] Feb 04 09:32:37 crc kubenswrapper[4644]: I0204 09:32:37.191435 4644 generic.go:334] "Generic (PLEG): container finished" podID="552450e0-a82e-4826-b912-c4327e38a86d" containerID="ad6745a7c7ae63b7dfac66102b877ae0a17f637488792caf8f2e8fd47d9766f9" exitCode=0 Feb 04 09:32:37 crc kubenswrapper[4644]: I0204 09:32:37.191599 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvl9x" event={"ID":"552450e0-a82e-4826-b912-c4327e38a86d","Type":"ContainerDied","Data":"ad6745a7c7ae63b7dfac66102b877ae0a17f637488792caf8f2e8fd47d9766f9"} Feb 04 09:32:37 crc kubenswrapper[4644]: I0204 09:32:37.192900 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvl9x" event={"ID":"552450e0-a82e-4826-b912-c4327e38a86d","Type":"ContainerStarted","Data":"0849de278f5a134b7cdd6420afcd0e8fd295d596d8ac35dbc6d665af08847766"} Feb 04 09:32:38 crc kubenswrapper[4644]: I0204 09:32:38.202897 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvl9x" event={"ID":"552450e0-a82e-4826-b912-c4327e38a86d","Type":"ContainerStarted","Data":"c0a10ffbf3675e1fb4f97eb30a5e33e9ad688462ecb256456e894a2ed6f49b69"} Feb 04 09:32:39 crc kubenswrapper[4644]: I0204 09:32:39.660616 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b" Feb 04 09:32:39 crc kubenswrapper[4644]: E0204 09:32:39.661088 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:32:40 crc kubenswrapper[4644]: I0204 09:32:40.220630 4644 generic.go:334] "Generic (PLEG): container finished" podID="552450e0-a82e-4826-b912-c4327e38a86d" 
containerID="c0a10ffbf3675e1fb4f97eb30a5e33e9ad688462ecb256456e894a2ed6f49b69" exitCode=0 Feb 04 09:32:40 crc kubenswrapper[4644]: I0204 09:32:40.220701 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvl9x" event={"ID":"552450e0-a82e-4826-b912-c4327e38a86d","Type":"ContainerDied","Data":"c0a10ffbf3675e1fb4f97eb30a5e33e9ad688462ecb256456e894a2ed6f49b69"} Feb 04 09:32:41 crc kubenswrapper[4644]: I0204 09:32:41.232108 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvl9x" event={"ID":"552450e0-a82e-4826-b912-c4327e38a86d","Type":"ContainerStarted","Data":"d57fc761a94902ab8cc98bb231633a1b8a66d7da645b5bb6321133f1f057315b"} Feb 04 09:32:41 crc kubenswrapper[4644]: I0204 09:32:41.255507 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bvl9x" podStartSLOduration=2.787980018 podStartE2EDuration="6.255481931s" podCreationTimestamp="2026-02-04 09:32:35 +0000 UTC" firstStartedPulling="2026-02-04 09:32:37.200045781 +0000 UTC m=+3067.240103536" lastFinishedPulling="2026-02-04 09:32:40.667547694 +0000 UTC m=+3070.707605449" observedRunningTime="2026-02-04 09:32:41.252445799 +0000 UTC m=+3071.292503554" watchObservedRunningTime="2026-02-04 09:32:41.255481931 +0000 UTC m=+3071.295539696" Feb 04 09:32:46 crc kubenswrapper[4644]: I0204 09:32:46.031864 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bvl9x" Feb 04 09:32:46 crc kubenswrapper[4644]: I0204 09:32:46.033722 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bvl9x" Feb 04 09:32:46 crc kubenswrapper[4644]: I0204 09:32:46.095060 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bvl9x" Feb 04 09:32:46 crc kubenswrapper[4644]: I0204 09:32:46.352499 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bvl9x" Feb 04 09:32:46 crc kubenswrapper[4644]: I0204 09:32:46.419895 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bvl9x"] Feb 04 09:32:48 crc kubenswrapper[4644]: I0204 09:32:48.290097 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bvl9x" podUID="552450e0-a82e-4826-b912-c4327e38a86d" containerName="registry-server" containerID="cri-o://d57fc761a94902ab8cc98bb231633a1b8a66d7da645b5bb6321133f1f057315b" gracePeriod=2 Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.086402 4644 util.go:48] "No ready sandbox for pod can be found. 
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.086402 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bvl9x"
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.189720 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/552450e0-a82e-4826-b912-c4327e38a86d-catalog-content\") pod \"552450e0-a82e-4826-b912-c4327e38a86d\" (UID: \"552450e0-a82e-4826-b912-c4327e38a86d\") "
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.189796 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/552450e0-a82e-4826-b912-c4327e38a86d-utilities\") pod \"552450e0-a82e-4826-b912-c4327e38a86d\" (UID: \"552450e0-a82e-4826-b912-c4327e38a86d\") "
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.189946 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7vcm\" (UniqueName: \"kubernetes.io/projected/552450e0-a82e-4826-b912-c4327e38a86d-kube-api-access-z7vcm\") pod \"552450e0-a82e-4826-b912-c4327e38a86d\" (UID: \"552450e0-a82e-4826-b912-c4327e38a86d\") "
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.190959 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/552450e0-a82e-4826-b912-c4327e38a86d-utilities" (OuterVolumeSpecName: "utilities") pod "552450e0-a82e-4826-b912-c4327e38a86d" (UID: "552450e0-a82e-4826-b912-c4327e38a86d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.196447 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552450e0-a82e-4826-b912-c4327e38a86d-kube-api-access-z7vcm" (OuterVolumeSpecName: "kube-api-access-z7vcm") pod "552450e0-a82e-4826-b912-c4327e38a86d" (UID: "552450e0-a82e-4826-b912-c4327e38a86d"). InnerVolumeSpecName "kube-api-access-z7vcm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.262135 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/552450e0-a82e-4826-b912-c4327e38a86d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "552450e0-a82e-4826-b912-c4327e38a86d" (UID: "552450e0-a82e-4826-b912-c4327e38a86d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.292950 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/552450e0-a82e-4826-b912-c4327e38a86d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.292980 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/552450e0-a82e-4826-b912-c4327e38a86d-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.292995 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7vcm\" (UniqueName: \"kubernetes.io/projected/552450e0-a82e-4826-b912-c4327e38a86d-kube-api-access-z7vcm\") on node \"crc\" DevicePath \"\""
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.301774 4644 generic.go:334] "Generic (PLEG): container finished" podID="552450e0-a82e-4826-b912-c4327e38a86d" containerID="d57fc761a94902ab8cc98bb231633a1b8a66d7da645b5bb6321133f1f057315b" exitCode=0
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.301848 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bvl9x"
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.301868 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvl9x" event={"ID":"552450e0-a82e-4826-b912-c4327e38a86d","Type":"ContainerDied","Data":"d57fc761a94902ab8cc98bb231633a1b8a66d7da645b5bb6321133f1f057315b"}
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.302371 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvl9x" event={"ID":"552450e0-a82e-4826-b912-c4327e38a86d","Type":"ContainerDied","Data":"0849de278f5a134b7cdd6420afcd0e8fd295d596d8ac35dbc6d665af08847766"}
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.302394 4644 scope.go:117] "RemoveContainer" containerID="d57fc761a94902ab8cc98bb231633a1b8a66d7da645b5bb6321133f1f057315b"
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.329764 4644 scope.go:117] "RemoveContainer" containerID="c0a10ffbf3675e1fb4f97eb30a5e33e9ad688462ecb256456e894a2ed6f49b69"
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.354095 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bvl9x"]
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.354460 4644 scope.go:117] "RemoveContainer" containerID="ad6745a7c7ae63b7dfac66102b877ae0a17f637488792caf8f2e8fd47d9766f9"
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.369460 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bvl9x"]
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.410860 4644 scope.go:117] "RemoveContainer" containerID="d57fc761a94902ab8cc98bb231633a1b8a66d7da645b5bb6321133f1f057315b"
Feb 04 09:32:49 crc kubenswrapper[4644]: E0204 09:32:49.411370 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d57fc761a94902ab8cc98bb231633a1b8a66d7da645b5bb6321133f1f057315b\": container with ID starting with d57fc761a94902ab8cc98bb231633a1b8a66d7da645b5bb6321133f1f057315b not found: ID does not exist" containerID="d57fc761a94902ab8cc98bb231633a1b8a66d7da645b5bb6321133f1f057315b"
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.411405 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57fc761a94902ab8cc98bb231633a1b8a66d7da645b5bb6321133f1f057315b"} err="failed to get container status \"d57fc761a94902ab8cc98bb231633a1b8a66d7da645b5bb6321133f1f057315b\": rpc error: code = NotFound desc = could not find container \"d57fc761a94902ab8cc98bb231633a1b8a66d7da645b5bb6321133f1f057315b\": container with ID starting with d57fc761a94902ab8cc98bb231633a1b8a66d7da645b5bb6321133f1f057315b not found: ID does not exist"
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.411435 4644 scope.go:117] "RemoveContainer" containerID="c0a10ffbf3675e1fb4f97eb30a5e33e9ad688462ecb256456e894a2ed6f49b69"
Feb 04 09:32:49 crc kubenswrapper[4644]: E0204 09:32:49.411754 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0a10ffbf3675e1fb4f97eb30a5e33e9ad688462ecb256456e894a2ed6f49b69\": container with ID starting with c0a10ffbf3675e1fb4f97eb30a5e33e9ad688462ecb256456e894a2ed6f49b69 not found: ID does not exist" containerID="c0a10ffbf3675e1fb4f97eb30a5e33e9ad688462ecb256456e894a2ed6f49b69"
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.411882 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0a10ffbf3675e1fb4f97eb30a5e33e9ad688462ecb256456e894a2ed6f49b69"} err="failed to get container status \"c0a10ffbf3675e1fb4f97eb30a5e33e9ad688462ecb256456e894a2ed6f49b69\": rpc error: code = NotFound desc = could not find container \"c0a10ffbf3675e1fb4f97eb30a5e33e9ad688462ecb256456e894a2ed6f49b69\": container with ID starting with c0a10ffbf3675e1fb4f97eb30a5e33e9ad688462ecb256456e894a2ed6f49b69 not found: ID does not exist"
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.411993 4644 scope.go:117] "RemoveContainer" containerID="ad6745a7c7ae63b7dfac66102b877ae0a17f637488792caf8f2e8fd47d9766f9"
Feb 04 09:32:49 crc kubenswrapper[4644]: E0204 09:32:49.412367 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad6745a7c7ae63b7dfac66102b877ae0a17f637488792caf8f2e8fd47d9766f9\": container with ID starting with ad6745a7c7ae63b7dfac66102b877ae0a17f637488792caf8f2e8fd47d9766f9 not found: ID does not exist" containerID="ad6745a7c7ae63b7dfac66102b877ae0a17f637488792caf8f2e8fd47d9766f9"
Feb 04 09:32:49 crc kubenswrapper[4644]: I0204 09:32:49.412400 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6745a7c7ae63b7dfac66102b877ae0a17f637488792caf8f2e8fd47d9766f9"} err="failed to get container status \"ad6745a7c7ae63b7dfac66102b877ae0a17f637488792caf8f2e8fd47d9766f9\": rpc error: code = NotFound desc = could not find container \"ad6745a7c7ae63b7dfac66102b877ae0a17f637488792caf8f2e8fd47d9766f9\": container with ID starting with ad6745a7c7ae63b7dfac66102b877ae0a17f637488792caf8f2e8fd47d9766f9 not found: ID does not exist"
Feb 04 09:32:50 crc kubenswrapper[4644]: I0204 09:32:50.680879 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="552450e0-a82e-4826-b912-c4327e38a86d" path="/var/lib/kubelet/pods/552450e0-a82e-4826-b912-c4327e38a86d/volumes"
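The NotFound burst at 09:32:49 is a benign race: the containers were already removed along with their sandbox, so the explicit RemoveContainer calls find nothing and the kubelet logs the error and moves on. A toy sketch of that idempotent-delete pattern follows; errNotFound and the map store are illustrative stand-ins, not kubelet or CRI types.

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the gRPC NotFound status the runtime returns
// when a container ID no longer exists.
var errNotFound = errors.New("container not found: ID does not exist")

// removeContainer deletes id from a toy container store, failing with
// errNotFound when the container is already gone.
func removeContainer(id string, store map[string]bool) error {
	if !store[id] {
		return fmt.Errorf("failed to get container status %q: %w", id, errNotFound)
	}
	delete(store, id)
	return nil
}

func main() {
	store := map[string]bool{} // containers already removed with the sandbox
	err := removeContainer("d57fc761a949", store)
	if errors.Is(err, errNotFound) {
		// Same outcome the kubelet settles for: log it, then move on,
		// because "already gone" is success for a delete.
		fmt.Println("DeleteContainer returned error:", err, "- treating as removed")
	}
}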
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:33:05 crc kubenswrapper[4644]: I0204 09:33:05.659880 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b" Feb 04 09:33:05 crc kubenswrapper[4644]: E0204 09:33:05.660619 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:33:18 crc kubenswrapper[4644]: I0204 09:33:18.662174 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b" Feb 04 09:33:18 crc kubenswrapper[4644]: E0204 09:33:18.667407 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:33:29 crc kubenswrapper[4644]: I0204 09:33:29.661073 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b" Feb 04 09:33:29 crc kubenswrapper[4644]: E0204 09:33:29.661972 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:33:41 crc kubenswrapper[4644]: I0204 09:33:41.659409 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b" Feb 04 09:33:41 crc kubenswrapper[4644]: E0204 09:33:41.660174 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:33:53 crc kubenswrapper[4644]: I0204 09:33:53.660155 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b" Feb 04 09:33:53 crc kubenswrapper[4644]: E0204 09:33:53.660921 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:34:04 crc kubenswrapper[4644]: I0204 09:34:04.660602 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b" Feb 04 09:34:04 crc kubenswrapper[4644]: E0204 09:34:04.661228 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:34:16 crc kubenswrapper[4644]: I0204 09:34:16.659596 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b" Feb 04 09:34:16 crc kubenswrapper[4644]: E0204 09:34:16.660150 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:34:29 crc kubenswrapper[4644]: I0204 09:34:29.659771 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b" Feb 04 09:34:29 crc kubenswrapper[4644]: E0204 09:34:29.660549 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:34:44 crc kubenswrapper[4644]: I0204 09:34:44.660502 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b" Feb 04 09:34:44 crc kubenswrapper[4644]: E0204 09:34:44.662219 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.653264 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ch95j"] Feb 04 09:34:54 crc kubenswrapper[4644]: E0204 09:34:54.654337 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="552450e0-a82e-4826-b912-c4327e38a86d" containerName="registry-server" Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.654353 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="552450e0-a82e-4826-b912-c4327e38a86d" containerName="registry-server" Feb 04 09:34:54 crc kubenswrapper[4644]: E0204 09:34:54.654390 4644 
Feb 04 09:34:54 crc kubenswrapper[4644]: E0204 09:34:54.654390 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="552450e0-a82e-4826-b912-c4327e38a86d" containerName="extract-utilities"
Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.654399 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="552450e0-a82e-4826-b912-c4327e38a86d" containerName="extract-utilities"
Feb 04 09:34:54 crc kubenswrapper[4644]: E0204 09:34:54.654425 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="552450e0-a82e-4826-b912-c4327e38a86d" containerName="extract-content"
Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.654433 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="552450e0-a82e-4826-b912-c4327e38a86d" containerName="extract-content"
Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.654635 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="552450e0-a82e-4826-b912-c4327e38a86d" containerName="registry-server"
Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.657103 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.691022 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ch95j"]
Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.818051 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbff3c2e-78b3-4773-b103-7dc5583c79f2-utilities\") pod \"redhat-operators-ch95j\" (UID: \"dbff3c2e-78b3-4773-b103-7dc5583c79f2\") " pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.818487 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbff3c2e-78b3-4773-b103-7dc5583c79f2-catalog-content\") pod \"redhat-operators-ch95j\" (UID: \"dbff3c2e-78b3-4773-b103-7dc5583c79f2\") " pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.818626 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cd8m\" (UniqueName: \"kubernetes.io/projected/dbff3c2e-78b3-4773-b103-7dc5583c79f2-kube-api-access-8cd8m\") pod \"redhat-operators-ch95j\" (UID: \"dbff3c2e-78b3-4773-b103-7dc5583c79f2\") " pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.920919 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbff3c2e-78b3-4773-b103-7dc5583c79f2-utilities\") pod \"redhat-operators-ch95j\" (UID: \"dbff3c2e-78b3-4773-b103-7dc5583c79f2\") " pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.921028 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbff3c2e-78b3-4773-b103-7dc5583c79f2-catalog-content\") pod \"redhat-operators-ch95j\" (UID: \"dbff3c2e-78b3-4773-b103-7dc5583c79f2\") " pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.921108 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cd8m\" (UniqueName: \"kubernetes.io/projected/dbff3c2e-78b3-4773-b103-7dc5583c79f2-kube-api-access-8cd8m\") pod \"redhat-operators-ch95j\" (UID: \"dbff3c2e-78b3-4773-b103-7dc5583c79f2\") " pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.921688 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbff3c2e-78b3-4773-b103-7dc5583c79f2-utilities\") pod \"redhat-operators-ch95j\" (UID: \"dbff3c2e-78b3-4773-b103-7dc5583c79f2\") " pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.921793 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbff3c2e-78b3-4773-b103-7dc5583c79f2-catalog-content\") pod \"redhat-operators-ch95j\" (UID: \"dbff3c2e-78b3-4773-b103-7dc5583c79f2\") " pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.943735 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cd8m\" (UniqueName: \"kubernetes.io/projected/dbff3c2e-78b3-4773-b103-7dc5583c79f2-kube-api-access-8cd8m\") pod \"redhat-operators-ch95j\" (UID: \"dbff3c2e-78b3-4773-b103-7dc5583c79f2\") " pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:34:54 crc kubenswrapper[4644]: I0204 09:34:54.990706 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:34:55 crc kubenswrapper[4644]: I0204 09:34:55.594887 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ch95j"]
Feb 04 09:34:56 crc kubenswrapper[4644]: I0204 09:34:56.379710 4644 generic.go:334] "Generic (PLEG): container finished" podID="dbff3c2e-78b3-4773-b103-7dc5583c79f2" containerID="5690713af007b7b9b85173fa7af8f93814fdc579889b068c3aa925c21acc05bf" exitCode=0
Feb 04 09:34:56 crc kubenswrapper[4644]: I0204 09:34:56.379749 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch95j" event={"ID":"dbff3c2e-78b3-4773-b103-7dc5583c79f2","Type":"ContainerDied","Data":"5690713af007b7b9b85173fa7af8f93814fdc579889b068c3aa925c21acc05bf"}
Feb 04 09:34:56 crc kubenswrapper[4644]: I0204 09:34:56.379937 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch95j" event={"ID":"dbff3c2e-78b3-4773-b103-7dc5583c79f2","Type":"ContainerStarted","Data":"08e65b373d470cc538cfedb562bac2e66a5fc9a568404a4e1910f226e4a9d68b"}
Feb 04 09:34:56 crc kubenswrapper[4644]: I0204 09:34:56.660206 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"
Feb 04 09:34:56 crc kubenswrapper[4644]: E0204 09:34:56.660736 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:34:57 crc kubenswrapper[4644]: I0204 09:34:57.390499 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch95j" event={"ID":"dbff3c2e-78b3-4773-b103-7dc5583c79f2","Type":"ContainerStarted","Data":"53d1bdb61a3e758d54d84485892c7cf493c724542bcc82f8a02ab052e3d1e91c"}
Feb 04 09:35:04 crc kubenswrapper[4644]: I0204 09:35:04.464338 4644 generic.go:334] "Generic (PLEG): container finished" podID="dbff3c2e-78b3-4773-b103-7dc5583c79f2" containerID="53d1bdb61a3e758d54d84485892c7cf493c724542bcc82f8a02ab052e3d1e91c" exitCode=0
Feb 04 09:35:04 crc kubenswrapper[4644]: I0204 09:35:04.464359 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch95j" event={"ID":"dbff3c2e-78b3-4773-b103-7dc5583c79f2","Type":"ContainerDied","Data":"53d1bdb61a3e758d54d84485892c7cf493c724542bcc82f8a02ab052e3d1e91c"}
Feb 04 09:35:05 crc kubenswrapper[4644]: I0204 09:35:05.476135 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch95j" event={"ID":"dbff3c2e-78b3-4773-b103-7dc5583c79f2","Type":"ContainerStarted","Data":"d0e88e5bffedb482914c0604e513d00b703f51a4edb876a3c79b516ab709d99d"}
Feb 04 09:35:05 crc kubenswrapper[4644]: I0204 09:35:05.505576 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ch95j" podStartSLOduration=3.010517031 podStartE2EDuration="11.505550559s" podCreationTimestamp="2026-02-04 09:34:54 +0000 UTC" firstStartedPulling="2026-02-04 09:34:56.381499611 +0000 UTC m=+3206.421557366" lastFinishedPulling="2026-02-04 09:35:04.876533139 +0000 UTC m=+3214.916590894" observedRunningTime="2026-02-04 09:35:05.497286415 +0000 UTC m=+3215.537344180" watchObservedRunningTime="2026-02-04 09:35:05.505550559 +0000 UTC m=+3215.545608314"
Feb 04 09:35:07 crc kubenswrapper[4644]: I0204 09:35:07.660304 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"
Feb 04 09:35:07 crc kubenswrapper[4644]: E0204 09:35:07.660866 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:35:14 crc kubenswrapper[4644]: I0204 09:35:14.990859 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:35:14 crc kubenswrapper[4644]: I0204 09:35:14.991470 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:35:16 crc kubenswrapper[4644]: I0204 09:35:16.033114 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ch95j" podUID="dbff3c2e-78b3-4773-b103-7dc5583c79f2" containerName="registry-server" probeResult="failure" output=<
Feb 04 09:35:16 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s
Feb 04 09:35:16 crc kubenswrapper[4644]: >
Feb 04 09:35:20 crc kubenswrapper[4644]: I0204 09:35:20.667013 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"
Feb 04 09:35:20 crc kubenswrapper[4644]: E0204 09:35:20.667843 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:35:26 crc kubenswrapper[4644]: I0204 09:35:26.043957 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ch95j" podUID="dbff3c2e-78b3-4773-b103-7dc5583c79f2" containerName="registry-server" probeResult="failure" output=<
Feb 04 09:35:26 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s
Feb 04 09:35:26 crc kubenswrapper[4644]: >
Feb 04 09:35:31 crc kubenswrapper[4644]: I0204 09:35:31.660421 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"
Feb 04 09:35:31 crc kubenswrapper[4644]: E0204 09:35:31.661305 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:35:36 crc kubenswrapper[4644]: I0204 09:35:36.049719 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ch95j" podUID="dbff3c2e-78b3-4773-b103-7dc5583c79f2" containerName="registry-server" probeResult="failure" output=<
Feb 04 09:35:36 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s
Feb 04 09:35:36 crc kubenswrapper[4644]: >
Feb 04 09:35:44 crc kubenswrapper[4644]: I0204 09:35:44.663840 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"
Feb 04 09:35:44 crc kubenswrapper[4644]: E0204 09:35:44.664629 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:35:45 crc kubenswrapper[4644]: I0204 09:35:45.043845 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:35:45 crc kubenswrapper[4644]: I0204 09:35:45.093363 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:35:45 crc kubenswrapper[4644]: I0204 09:35:45.291505 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ch95j"]
Feb 04 09:35:46 crc kubenswrapper[4644]: I0204 09:35:46.851108 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ch95j" podUID="dbff3c2e-78b3-4773-b103-7dc5583c79f2" containerName="registry-server" containerID="cri-o://d0e88e5bffedb482914c0604e513d00b703f51a4edb876a3c79b516ab709d99d" gracePeriod=2
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.582820 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.627873 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cd8m\" (UniqueName: \"kubernetes.io/projected/dbff3c2e-78b3-4773-b103-7dc5583c79f2-kube-api-access-8cd8m\") pod \"dbff3c2e-78b3-4773-b103-7dc5583c79f2\" (UID: \"dbff3c2e-78b3-4773-b103-7dc5583c79f2\") "
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.628042 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbff3c2e-78b3-4773-b103-7dc5583c79f2-catalog-content\") pod \"dbff3c2e-78b3-4773-b103-7dc5583c79f2\" (UID: \"dbff3c2e-78b3-4773-b103-7dc5583c79f2\") "
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.628078 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbff3c2e-78b3-4773-b103-7dc5583c79f2-utilities\") pod \"dbff3c2e-78b3-4773-b103-7dc5583c79f2\" (UID: \"dbff3c2e-78b3-4773-b103-7dc5583c79f2\") "
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.629256 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbff3c2e-78b3-4773-b103-7dc5583c79f2-utilities" (OuterVolumeSpecName: "utilities") pod "dbff3c2e-78b3-4773-b103-7dc5583c79f2" (UID: "dbff3c2e-78b3-4773-b103-7dc5583c79f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.635453 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbff3c2e-78b3-4773-b103-7dc5583c79f2-kube-api-access-8cd8m" (OuterVolumeSpecName: "kube-api-access-8cd8m") pod "dbff3c2e-78b3-4773-b103-7dc5583c79f2" (UID: "dbff3c2e-78b3-4773-b103-7dc5583c79f2"). InnerVolumeSpecName "kube-api-access-8cd8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.730018 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cd8m\" (UniqueName: \"kubernetes.io/projected/dbff3c2e-78b3-4773-b103-7dc5583c79f2-kube-api-access-8cd8m\") on node \"crc\" DevicePath \"\""
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.730059 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbff3c2e-78b3-4773-b103-7dc5583c79f2-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.780890 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbff3c2e-78b3-4773-b103-7dc5583c79f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbff3c2e-78b3-4773-b103-7dc5583c79f2" (UID: "dbff3c2e-78b3-4773-b103-7dc5583c79f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.831379 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbff3c2e-78b3-4773-b103-7dc5583c79f2-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.861681 4644 generic.go:334] "Generic (PLEG): container finished" podID="dbff3c2e-78b3-4773-b103-7dc5583c79f2" containerID="d0e88e5bffedb482914c0604e513d00b703f51a4edb876a3c79b516ab709d99d" exitCode=0
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.861728 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch95j" event={"ID":"dbff3c2e-78b3-4773-b103-7dc5583c79f2","Type":"ContainerDied","Data":"d0e88e5bffedb482914c0604e513d00b703f51a4edb876a3c79b516ab709d99d"}
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.861773 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch95j" event={"ID":"dbff3c2e-78b3-4773-b103-7dc5583c79f2","Type":"ContainerDied","Data":"08e65b373d470cc538cfedb562bac2e66a5fc9a568404a4e1910f226e4a9d68b"}
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.861789 4644 scope.go:117] "RemoveContainer" containerID="d0e88e5bffedb482914c0604e513d00b703f51a4edb876a3c79b516ab709d99d"
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.861784 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ch95j"
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.890837 4644 scope.go:117] "RemoveContainer" containerID="53d1bdb61a3e758d54d84485892c7cf493c724542bcc82f8a02ab052e3d1e91c"
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.902597 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ch95j"]
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.914018 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ch95j"]
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.962687 4644 scope.go:117] "RemoveContainer" containerID="5690713af007b7b9b85173fa7af8f93814fdc579889b068c3aa925c21acc05bf"
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.983628 4644 scope.go:117] "RemoveContainer" containerID="d0e88e5bffedb482914c0604e513d00b703f51a4edb876a3c79b516ab709d99d"
Feb 04 09:35:47 crc kubenswrapper[4644]: E0204 09:35:47.984178 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0e88e5bffedb482914c0604e513d00b703f51a4edb876a3c79b516ab709d99d\": container with ID starting with d0e88e5bffedb482914c0604e513d00b703f51a4edb876a3c79b516ab709d99d not found: ID does not exist" containerID="d0e88e5bffedb482914c0604e513d00b703f51a4edb876a3c79b516ab709d99d"
Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.984231 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0e88e5bffedb482914c0604e513d00b703f51a4edb876a3c79b516ab709d99d"} err="failed to get container status \"d0e88e5bffedb482914c0604e513d00b703f51a4edb876a3c79b516ab709d99d\": rpc error: code = NotFound desc = could not find container \"d0e88e5bffedb482914c0604e513d00b703f51a4edb876a3c79b516ab709d99d\": container with ID starting with d0e88e5bffedb482914c0604e513d00b703f51a4edb876a3c79b516ab709d99d not found: ID does not exist"
Feb 04 09:35:47 crc 
kubenswrapper[4644]: I0204 09:35:47.984268 4644 scope.go:117] "RemoveContainer" containerID="53d1bdb61a3e758d54d84485892c7cf493c724542bcc82f8a02ab052e3d1e91c" Feb 04 09:35:47 crc kubenswrapper[4644]: E0204 09:35:47.984869 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53d1bdb61a3e758d54d84485892c7cf493c724542bcc82f8a02ab052e3d1e91c\": container with ID starting with 53d1bdb61a3e758d54d84485892c7cf493c724542bcc82f8a02ab052e3d1e91c not found: ID does not exist" containerID="53d1bdb61a3e758d54d84485892c7cf493c724542bcc82f8a02ab052e3d1e91c" Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.984948 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53d1bdb61a3e758d54d84485892c7cf493c724542bcc82f8a02ab052e3d1e91c"} err="failed to get container status \"53d1bdb61a3e758d54d84485892c7cf493c724542bcc82f8a02ab052e3d1e91c\": rpc error: code = NotFound desc = could not find container \"53d1bdb61a3e758d54d84485892c7cf493c724542bcc82f8a02ab052e3d1e91c\": container with ID starting with 53d1bdb61a3e758d54d84485892c7cf493c724542bcc82f8a02ab052e3d1e91c not found: ID does not exist" Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.984985 4644 scope.go:117] "RemoveContainer" containerID="5690713af007b7b9b85173fa7af8f93814fdc579889b068c3aa925c21acc05bf" Feb 04 09:35:47 crc kubenswrapper[4644]: E0204 09:35:47.985517 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5690713af007b7b9b85173fa7af8f93814fdc579889b068c3aa925c21acc05bf\": container with ID starting with 5690713af007b7b9b85173fa7af8f93814fdc579889b068c3aa925c21acc05bf not found: ID does not exist" containerID="5690713af007b7b9b85173fa7af8f93814fdc579889b068c3aa925c21acc05bf" Feb 04 09:35:47 crc kubenswrapper[4644]: I0204 09:35:47.985546 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5690713af007b7b9b85173fa7af8f93814fdc579889b068c3aa925c21acc05bf"} err="failed to get container status \"5690713af007b7b9b85173fa7af8f93814fdc579889b068c3aa925c21acc05bf\": rpc error: code = NotFound desc = could not find container \"5690713af007b7b9b85173fa7af8f93814fdc579889b068c3aa925c21acc05bf\": container with ID starting with 5690713af007b7b9b85173fa7af8f93814fdc579889b068c3aa925c21acc05bf not found: ID does not exist" Feb 04 09:35:48 crc kubenswrapper[4644]: I0204 09:35:48.675584 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbff3c2e-78b3-4773-b103-7dc5583c79f2" path="/var/lib/kubelet/pods/dbff3c2e-78b3-4773-b103-7dc5583c79f2/volumes" Feb 04 09:35:58 crc kubenswrapper[4644]: I0204 09:35:58.660254 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b" Feb 04 09:35:58 crc kubenswrapper[4644]: E0204 09:35:58.660987 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:36:11 crc kubenswrapper[4644]: I0204 09:36:11.660123 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b" 
Feb 04 09:36:12 crc kubenswrapper[4644]: I0204 09:36:12.071162 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"6ba97958315d4bce79b08a62655a3a62ea8ea4f1dd3db6eb25be55e5206d4ab2"}
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.314955 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-slhf2"]
Feb 04 09:38:15 crc kubenswrapper[4644]: E0204 09:38:15.315992 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbff3c2e-78b3-4773-b103-7dc5583c79f2" containerName="registry-server"
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.316011 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbff3c2e-78b3-4773-b103-7dc5583c79f2" containerName="registry-server"
Feb 04 09:38:15 crc kubenswrapper[4644]: E0204 09:38:15.316027 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbff3c2e-78b3-4773-b103-7dc5583c79f2" containerName="extract-utilities"
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.316036 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbff3c2e-78b3-4773-b103-7dc5583c79f2" containerName="extract-utilities"
Feb 04 09:38:15 crc kubenswrapper[4644]: E0204 09:38:15.316075 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbff3c2e-78b3-4773-b103-7dc5583c79f2" containerName="extract-content"
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.316083 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbff3c2e-78b3-4773-b103-7dc5583c79f2" containerName="extract-content"
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.316290 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbff3c2e-78b3-4773-b103-7dc5583c79f2" containerName="registry-server"
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.317932 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.328201 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-slhf2"]
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.474156 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szxb4\" (UniqueName: \"kubernetes.io/projected/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-kube-api-access-szxb4\") pod \"certified-operators-slhf2\" (UID: \"63a49cfd-fc02-4e6b-9f1c-4a79262155b7\") " pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.474371 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-utilities\") pod \"certified-operators-slhf2\" (UID: \"63a49cfd-fc02-4e6b-9f1c-4a79262155b7\") " pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.474470 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-catalog-content\") pod \"certified-operators-slhf2\" (UID: \"63a49cfd-fc02-4e6b-9f1c-4a79262155b7\") " pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.576402 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szxb4\" (UniqueName: \"kubernetes.io/projected/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-kube-api-access-szxb4\") pod \"certified-operators-slhf2\" (UID: \"63a49cfd-fc02-4e6b-9f1c-4a79262155b7\") " pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.576809 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-utilities\") pod \"certified-operators-slhf2\" (UID: \"63a49cfd-fc02-4e6b-9f1c-4a79262155b7\") " pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.576894 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-catalog-content\") pod \"certified-operators-slhf2\" (UID: \"63a49cfd-fc02-4e6b-9f1c-4a79262155b7\") " pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.577429 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-utilities\") pod \"certified-operators-slhf2\" (UID: \"63a49cfd-fc02-4e6b-9f1c-4a79262155b7\") " pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.577439 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-catalog-content\") pod \"certified-operators-slhf2\" (UID: \"63a49cfd-fc02-4e6b-9f1c-4a79262155b7\") " pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.599472 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szxb4\" (UniqueName: \"kubernetes.io/projected/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-kube-api-access-szxb4\") pod \"certified-operators-slhf2\" (UID: \"63a49cfd-fc02-4e6b-9f1c-4a79262155b7\") " pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:15 crc kubenswrapper[4644]: I0204 09:38:15.647871 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:16 crc kubenswrapper[4644]: I0204 09:38:16.164950 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-slhf2"]
Feb 04 09:38:17 crc kubenswrapper[4644]: I0204 09:38:17.204962 4644 generic.go:334] "Generic (PLEG): container finished" podID="63a49cfd-fc02-4e6b-9f1c-4a79262155b7" containerID="9f5fa249234c31fe2a8d112cc258b9a559df1530a383f4019c0a6d4fd3b6bd0a" exitCode=0
Feb 04 09:38:17 crc kubenswrapper[4644]: I0204 09:38:17.205147 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slhf2" event={"ID":"63a49cfd-fc02-4e6b-9f1c-4a79262155b7","Type":"ContainerDied","Data":"9f5fa249234c31fe2a8d112cc258b9a559df1530a383f4019c0a6d4fd3b6bd0a"}
Feb 04 09:38:17 crc kubenswrapper[4644]: I0204 09:38:17.205219 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slhf2" event={"ID":"63a49cfd-fc02-4e6b-9f1c-4a79262155b7","Type":"ContainerStarted","Data":"2b7cec62e607ee152b5fcfdfbac2d97ad4c7bc71d5c99da0bb8e2960c9b421c8"}
Feb 04 09:38:17 crc kubenswrapper[4644]: I0204 09:38:17.207977 4644 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 04 09:38:18 crc kubenswrapper[4644]: I0204 09:38:18.214895 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slhf2" event={"ID":"63a49cfd-fc02-4e6b-9f1c-4a79262155b7","Type":"ContainerStarted","Data":"ab42551a9bcf09a345f42e0fedf1e0302bcf7aea4e649e01a9a0acd5b7f0d75c"}
Feb 04 09:38:20 crc kubenswrapper[4644]: I0204 09:38:20.232586 4644 generic.go:334] "Generic (PLEG): container finished" podID="63a49cfd-fc02-4e6b-9f1c-4a79262155b7" containerID="ab42551a9bcf09a345f42e0fedf1e0302bcf7aea4e649e01a9a0acd5b7f0d75c" exitCode=0
Feb 04 09:38:20 crc kubenswrapper[4644]: I0204 09:38:20.232691 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slhf2" event={"ID":"63a49cfd-fc02-4e6b-9f1c-4a79262155b7","Type":"ContainerDied","Data":"ab42551a9bcf09a345f42e0fedf1e0302bcf7aea4e649e01a9a0acd5b7f0d75c"}
Feb 04 09:38:21 crc kubenswrapper[4644]: I0204 09:38:21.243789 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slhf2" event={"ID":"63a49cfd-fc02-4e6b-9f1c-4a79262155b7","Type":"ContainerStarted","Data":"1ca42492f796ae084795ee3d577b23964d46842bfaa896cda36b5f904ac586bd"}
Feb 04 09:38:21 crc kubenswrapper[4644]: I0204 09:38:21.269836 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-slhf2" podStartSLOduration=2.769611776 podStartE2EDuration="6.269811174s" podCreationTimestamp="2026-02-04 09:38:15 +0000 UTC" firstStartedPulling="2026-02-04 09:38:17.207625322 +0000 UTC m=+3407.247683087" lastFinishedPulling="2026-02-04 09:38:20.70782472 +0000 UTC m=+3410.747882485" observedRunningTime="2026-02-04 09:38:21.260221474 +0000 UTC m=+3411.300279249" watchObservedRunningTime="2026-02-04 09:38:21.269811174 +0000 UTC m=+3411.309868929"
Feb 04 09:38:25 crc kubenswrapper[4644]: I0204 09:38:25.649671 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:25 crc kubenswrapper[4644]: I0204 09:38:25.650163 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:25 crc kubenswrapper[4644]: I0204 09:38:25.695889 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:26 crc kubenswrapper[4644]: I0204 09:38:26.328803 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:26 crc kubenswrapper[4644]: I0204 09:38:26.390620 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-slhf2"]
Feb 04 09:38:28 crc kubenswrapper[4644]: I0204 09:38:28.299608 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-slhf2" podUID="63a49cfd-fc02-4e6b-9f1c-4a79262155b7" containerName="registry-server" containerID="cri-o://1ca42492f796ae084795ee3d577b23964d46842bfaa896cda36b5f904ac586bd" gracePeriod=2
Feb 04 09:38:28 crc kubenswrapper[4644]: I0204 09:38:28.911753 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.042793 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-utilities\") pod \"63a49cfd-fc02-4e6b-9f1c-4a79262155b7\" (UID: \"63a49cfd-fc02-4e6b-9f1c-4a79262155b7\") "
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.043049 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szxb4\" (UniqueName: \"kubernetes.io/projected/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-kube-api-access-szxb4\") pod \"63a49cfd-fc02-4e6b-9f1c-4a79262155b7\" (UID: \"63a49cfd-fc02-4e6b-9f1c-4a79262155b7\") "
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.043226 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-catalog-content\") pod \"63a49cfd-fc02-4e6b-9f1c-4a79262155b7\" (UID: \"63a49cfd-fc02-4e6b-9f1c-4a79262155b7\") "
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.043864 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-utilities" (OuterVolumeSpecName: "utilities") pod "63a49cfd-fc02-4e6b-9f1c-4a79262155b7" (UID: "63a49cfd-fc02-4e6b-9f1c-4a79262155b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.050131 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-kube-api-access-szxb4" (OuterVolumeSpecName: "kube-api-access-szxb4") pod "63a49cfd-fc02-4e6b-9f1c-4a79262155b7" (UID: "63a49cfd-fc02-4e6b-9f1c-4a79262155b7"). InnerVolumeSpecName "kube-api-access-szxb4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.093892 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63a49cfd-fc02-4e6b-9f1c-4a79262155b7" (UID: "63a49cfd-fc02-4e6b-9f1c-4a79262155b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.145279 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.145589 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.145682 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szxb4\" (UniqueName: \"kubernetes.io/projected/63a49cfd-fc02-4e6b-9f1c-4a79262155b7-kube-api-access-szxb4\") on node \"crc\" DevicePath \"\""
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.307643 4644 generic.go:334] "Generic (PLEG): container finished" podID="63a49cfd-fc02-4e6b-9f1c-4a79262155b7" containerID="1ca42492f796ae084795ee3d577b23964d46842bfaa896cda36b5f904ac586bd" exitCode=0
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.307681 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slhf2" event={"ID":"63a49cfd-fc02-4e6b-9f1c-4a79262155b7","Type":"ContainerDied","Data":"1ca42492f796ae084795ee3d577b23964d46842bfaa896cda36b5f904ac586bd"}
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.307706 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slhf2" event={"ID":"63a49cfd-fc02-4e6b-9f1c-4a79262155b7","Type":"ContainerDied","Data":"2b7cec62e607ee152b5fcfdfbac2d97ad4c7bc71d5c99da0bb8e2960c9b421c8"}
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.307723 4644 scope.go:117] "RemoveContainer" containerID="1ca42492f796ae084795ee3d577b23964d46842bfaa896cda36b5f904ac586bd"
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.307767 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-slhf2"
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.352408 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-slhf2"]
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.352558 4644 scope.go:117] "RemoveContainer" containerID="ab42551a9bcf09a345f42e0fedf1e0302bcf7aea4e649e01a9a0acd5b7f0d75c"
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.362513 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-slhf2"]
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.397522 4644 scope.go:117] "RemoveContainer" containerID="9f5fa249234c31fe2a8d112cc258b9a559df1530a383f4019c0a6d4fd3b6bd0a"
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.416508 4644 scope.go:117] "RemoveContainer" containerID="1ca42492f796ae084795ee3d577b23964d46842bfaa896cda36b5f904ac586bd"
Feb 04 09:38:29 crc kubenswrapper[4644]: E0204 09:38:29.416989 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ca42492f796ae084795ee3d577b23964d46842bfaa896cda36b5f904ac586bd\": container with ID starting with 1ca42492f796ae084795ee3d577b23964d46842bfaa896cda36b5f904ac586bd not found: ID does not exist" containerID="1ca42492f796ae084795ee3d577b23964d46842bfaa896cda36b5f904ac586bd"
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.417045 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ca42492f796ae084795ee3d577b23964d46842bfaa896cda36b5f904ac586bd"} err="failed to get container status \"1ca42492f796ae084795ee3d577b23964d46842bfaa896cda36b5f904ac586bd\": rpc error: code = NotFound desc = could not find container \"1ca42492f796ae084795ee3d577b23964d46842bfaa896cda36b5f904ac586bd\": container with ID starting with 1ca42492f796ae084795ee3d577b23964d46842bfaa896cda36b5f904ac586bd not found: ID does not exist"
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.417079 4644 scope.go:117] "RemoveContainer" containerID="ab42551a9bcf09a345f42e0fedf1e0302bcf7aea4e649e01a9a0acd5b7f0d75c"
Feb 04 09:38:29 crc kubenswrapper[4644]: E0204 09:38:29.417567 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab42551a9bcf09a345f42e0fedf1e0302bcf7aea4e649e01a9a0acd5b7f0d75c\": container with ID starting with ab42551a9bcf09a345f42e0fedf1e0302bcf7aea4e649e01a9a0acd5b7f0d75c not found: ID does not exist" containerID="ab42551a9bcf09a345f42e0fedf1e0302bcf7aea4e649e01a9a0acd5b7f0d75c"
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.417596 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab42551a9bcf09a345f42e0fedf1e0302bcf7aea4e649e01a9a0acd5b7f0d75c"} err="failed to get container status \"ab42551a9bcf09a345f42e0fedf1e0302bcf7aea4e649e01a9a0acd5b7f0d75c\": rpc error: code = NotFound desc = could not find container \"ab42551a9bcf09a345f42e0fedf1e0302bcf7aea4e649e01a9a0acd5b7f0d75c\": container with ID starting with ab42551a9bcf09a345f42e0fedf1e0302bcf7aea4e649e01a9a0acd5b7f0d75c not found: ID does not exist"
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.417618 4644 scope.go:117] "RemoveContainer" containerID="9f5fa249234c31fe2a8d112cc258b9a559df1530a383f4019c0a6d4fd3b6bd0a"
Feb 04 09:38:29 crc kubenswrapper[4644]: E0204 09:38:29.417991 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f5fa249234c31fe2a8d112cc258b9a559df1530a383f4019c0a6d4fd3b6bd0a\": container with ID starting with 9f5fa249234c31fe2a8d112cc258b9a559df1530a383f4019c0a6d4fd3b6bd0a not found: ID does not exist" containerID="9f5fa249234c31fe2a8d112cc258b9a559df1530a383f4019c0a6d4fd3b6bd0a"
Feb 04 09:38:29 crc kubenswrapper[4644]: I0204 09:38:29.418018 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f5fa249234c31fe2a8d112cc258b9a559df1530a383f4019c0a6d4fd3b6bd0a"} err="failed to get container status \"9f5fa249234c31fe2a8d112cc258b9a559df1530a383f4019c0a6d4fd3b6bd0a\": rpc error: code = NotFound desc = could not find container \"9f5fa249234c31fe2a8d112cc258b9a559df1530a383f4019c0a6d4fd3b6bd0a\": container with ID starting with 9f5fa249234c31fe2a8d112cc258b9a559df1530a383f4019c0a6d4fd3b6bd0a not found: ID does not exist"
Feb 04 09:38:30 crc kubenswrapper[4644]: I0204 09:38:30.671463 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63a49cfd-fc02-4e6b-9f1c-4a79262155b7" path="/var/lib/kubelet/pods/63a49cfd-fc02-4e6b-9f1c-4a79262155b7/volumes"
Feb 04 09:38:35 crc kubenswrapper[4644]: I0204 09:38:35.555455 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 09:38:35 crc kubenswrapper[4644]: I0204 09:38:35.556707 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 09:39:05 crc kubenswrapper[4644]: I0204 09:39:05.555408 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 09:39:05 crc kubenswrapper[4644]: I0204 09:39:05.555908 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 09:39:35 crc kubenswrapper[4644]: I0204 09:39:35.555122 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 09:39:35 crc kubenswrapper[4644]: I0204 09:39:35.555701 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 09:39:35 crc kubenswrapper[4644]: I0204 09:39:35.555750 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck"
Feb 04 09:39:35 crc kubenswrapper[4644]: I0204 09:39:35.556455 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ba97958315d4bce79b08a62655a3a62ea8ea4f1dd3db6eb25be55e5206d4ab2"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 04 09:39:35 crc kubenswrapper[4644]: I0204 09:39:35.556498 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://6ba97958315d4bce79b08a62655a3a62ea8ea4f1dd3db6eb25be55e5206d4ab2" gracePeriod=600
Feb 04 09:39:35 crc kubenswrapper[4644]: I0204 09:39:35.928967 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="6ba97958315d4bce79b08a62655a3a62ea8ea4f1dd3db6eb25be55e5206d4ab2" exitCode=0
Feb 04 09:39:35 crc kubenswrapper[4644]: I0204 09:39:35.929108 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"6ba97958315d4bce79b08a62655a3a62ea8ea4f1dd3db6eb25be55e5206d4ab2"}
Feb 04 09:39:35 crc kubenswrapper[4644]: I0204 09:39:35.929381 4644 scope.go:117] "RemoveContainer" containerID="c271c3779b5c6b880966b975cfccb9acba2aa8036882096653cc8acbeeac5d8b"
Feb 04 09:39:36 crc kubenswrapper[4644]: I0204 09:39:36.962716 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66"}
Feb 04 09:42:05 crc kubenswrapper[4644]: I0204 09:42:05.554932 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 09:42:05 crc kubenswrapper[4644]: I0204 09:42:05.555611 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 09:42:35 crc kubenswrapper[4644]: I0204 09:42:35.555156 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 09:42:35 crc kubenswrapper[4644]: I0204 09:42:35.555711 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.118252 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7hqqb"]
Feb 04 09:42:47 crc kubenswrapper[4644]: E0204 09:42:47.119174 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a49cfd-fc02-4e6b-9f1c-4a79262155b7" containerName="registry-server"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.119187 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a49cfd-fc02-4e6b-9f1c-4a79262155b7" containerName="registry-server"
Feb 04 09:42:47 crc kubenswrapper[4644]: E0204 09:42:47.119206 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a49cfd-fc02-4e6b-9f1c-4a79262155b7" containerName="extract-content"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.119212 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a49cfd-fc02-4e6b-9f1c-4a79262155b7" containerName="extract-content"
Feb 04 09:42:47 crc kubenswrapper[4644]: E0204 09:42:47.119236 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a49cfd-fc02-4e6b-9f1c-4a79262155b7" containerName="extract-utilities"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.119244 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a49cfd-fc02-4e6b-9f1c-4a79262155b7" containerName="extract-utilities"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.119640 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a49cfd-fc02-4e6b-9f1c-4a79262155b7" containerName="registry-server"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.120915 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.133490 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hqqb"]
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.188503 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852784ec-3433-494b-9c40-e9fa8d6ff7ad-utilities\") pod \"community-operators-7hqqb\" (UID: \"852784ec-3433-494b-9c40-e9fa8d6ff7ad\") " pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.188600 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9bpd\" (UniqueName: \"kubernetes.io/projected/852784ec-3433-494b-9c40-e9fa8d6ff7ad-kube-api-access-j9bpd\") pod \"community-operators-7hqqb\" (UID: \"852784ec-3433-494b-9c40-e9fa8d6ff7ad\") " pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.188698 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852784ec-3433-494b-9c40-e9fa8d6ff7ad-catalog-content\") pod \"community-operators-7hqqb\" (UID: \"852784ec-3433-494b-9c40-e9fa8d6ff7ad\") " pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.290283 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852784ec-3433-494b-9c40-e9fa8d6ff7ad-catalog-content\") pod \"community-operators-7hqqb\" (UID: \"852784ec-3433-494b-9c40-e9fa8d6ff7ad\") " pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.290451 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852784ec-3433-494b-9c40-e9fa8d6ff7ad-utilities\") pod \"community-operators-7hqqb\" (UID: \"852784ec-3433-494b-9c40-e9fa8d6ff7ad\") " pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.290523 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9bpd\" (UniqueName: \"kubernetes.io/projected/852784ec-3433-494b-9c40-e9fa8d6ff7ad-kube-api-access-j9bpd\") pod \"community-operators-7hqqb\" (UID: \"852784ec-3433-494b-9c40-e9fa8d6ff7ad\") " pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.290864 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852784ec-3433-494b-9c40-e9fa8d6ff7ad-catalog-content\") pod \"community-operators-7hqqb\" (UID: \"852784ec-3433-494b-9c40-e9fa8d6ff7ad\") " pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.291183 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852784ec-3433-494b-9c40-e9fa8d6ff7ad-utilities\") pod \"community-operators-7hqqb\" (UID: \"852784ec-3433-494b-9c40-e9fa8d6ff7ad\") " pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.324362 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9bpd\" (UniqueName: \"kubernetes.io/projected/852784ec-3433-494b-9c40-e9fa8d6ff7ad-kube-api-access-j9bpd\") pod \"community-operators-7hqqb\" (UID: \"852784ec-3433-494b-9c40-e9fa8d6ff7ad\") " pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:42:47 crc kubenswrapper[4644]: I0204 09:42:47.446465 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:42:48 crc kubenswrapper[4644]: I0204 09:42:48.140054 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hqqb"]
Feb 04 09:42:48 crc kubenswrapper[4644]: W0204 09:42:48.149636 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod852784ec_3433_494b_9c40_e9fa8d6ff7ad.slice/crio-3008daedcd3ec50f91409d32dee01e976126aeef3fa4f7efde92d1fabc20c1d0 WatchSource:0}: Error finding container 3008daedcd3ec50f91409d32dee01e976126aeef3fa4f7efde92d1fabc20c1d0: Status 404 returned error can't find the container with id 3008daedcd3ec50f91409d32dee01e976126aeef3fa4f7efde92d1fabc20c1d0
Feb 04 09:42:48 crc kubenswrapper[4644]: I0204 09:42:48.880781 4644 generic.go:334] "Generic (PLEG): container finished" podID="852784ec-3433-494b-9c40-e9fa8d6ff7ad" containerID="6a5d8b64deff35629a5855a58b3efda61388719179747756040c241058d6d822" exitCode=0
Feb 04 09:42:48 crc kubenswrapper[4644]: I0204 09:42:48.880840 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hqqb" event={"ID":"852784ec-3433-494b-9c40-e9fa8d6ff7ad","Type":"ContainerDied","Data":"6a5d8b64deff35629a5855a58b3efda61388719179747756040c241058d6d822"}
Feb 04 09:42:48 crc kubenswrapper[4644]: I0204 09:42:48.880867 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hqqb" event={"ID":"852784ec-3433-494b-9c40-e9fa8d6ff7ad","Type":"ContainerStarted","Data":"3008daedcd3ec50f91409d32dee01e976126aeef3fa4f7efde92d1fabc20c1d0"}
Feb 04 09:42:50 crc kubenswrapper[4644]: I0204 09:42:50.906749 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hqqb" event={"ID":"852784ec-3433-494b-9c40-e9fa8d6ff7ad","Type":"ContainerStarted","Data":"07f1f37905f72a7287c589fe9c69fd95b8da0e114b1d2d129519d1df27edaf3d"}
Feb 04 09:42:52 crc kubenswrapper[4644]: I0204 09:42:52.923174 4644 generic.go:334] "Generic (PLEG): container finished" podID="852784ec-3433-494b-9c40-e9fa8d6ff7ad" containerID="07f1f37905f72a7287c589fe9c69fd95b8da0e114b1d2d129519d1df27edaf3d" exitCode=0
Feb 04 09:42:52 crc kubenswrapper[4644]: I0204 09:42:52.923484 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hqqb" event={"ID":"852784ec-3433-494b-9c40-e9fa8d6ff7ad","Type":"ContainerDied","Data":"07f1f37905f72a7287c589fe9c69fd95b8da0e114b1d2d129519d1df27edaf3d"}
Feb 04 09:42:53 crc kubenswrapper[4644]: I0204 09:42:53.932770 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hqqb" event={"ID":"852784ec-3433-494b-9c40-e9fa8d6ff7ad","Type":"ContainerStarted","Data":"c36f21a65aab9f0c04b929734ad547dafcc0a54835800993a7cd12376a96f2e1"}
Feb 04 09:42:53 crc kubenswrapper[4644]: I0204 09:42:53.953894 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7hqqb" podStartSLOduration=2.479245196 podStartE2EDuration="6.953876138s" podCreationTimestamp="2026-02-04 09:42:47 +0000 UTC" firstStartedPulling="2026-02-04 09:42:48.883744977 +0000 UTC m=+3678.923802732" lastFinishedPulling="2026-02-04 09:42:53.358375919 +0000 UTC m=+3683.398433674" observedRunningTime="2026-02-04 09:42:53.953229309 +0000 UTC m=+3683.993287054" watchObservedRunningTime="2026-02-04 09:42:53.953876138 +0000 UTC m=+3683.993933913"
Feb 04 09:42:57 crc kubenswrapper[4644]: I0204 09:42:57.447385 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:42:57 crc kubenswrapper[4644]: I0204 09:42:57.447879 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:42:58 crc kubenswrapper[4644]: I0204 09:42:58.502736 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-7hqqb" podUID="852784ec-3433-494b-9c40-e9fa8d6ff7ad" containerName="registry-server" probeResult="failure" output=<
Feb 04 09:42:58 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s
Feb 04 09:42:58 crc kubenswrapper[4644]: >
Feb 04 09:43:05 crc kubenswrapper[4644]: I0204 09:43:05.555099 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 09:43:05 crc kubenswrapper[4644]: I0204 09:43:05.555659 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 09:43:05 crc kubenswrapper[4644]: I0204 09:43:05.555732 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck"
Feb 04 09:43:05 crc kubenswrapper[4644]: I0204 09:43:05.556722 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 04 09:43:05 crc kubenswrapper[4644]: I0204 09:43:05.556797 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" gracePeriod=600
Feb 04 09:43:05 crc kubenswrapper[4644]: E0204 09:43:05.673423 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:43:06 crc kubenswrapper[4644]: I0204 09:43:06.041077 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" exitCode=0
Feb 04 09:43:06 crc kubenswrapper[4644]: I0204 09:43:06.041122 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66"}
Feb 04 09:43:06 crc kubenswrapper[4644]: I0204 09:43:06.041196 4644 scope.go:117] "RemoveContainer" containerID="6ba97958315d4bce79b08a62655a3a62ea8ea4f1dd3db6eb25be55e5206d4ab2"
Feb 04 09:43:06 crc kubenswrapper[4644]: I0204 09:43:06.041809 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66"
Feb 04 09:43:06 crc kubenswrapper[4644]: E0204 09:43:06.042038 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:43:07 crc kubenswrapper[4644]: I0204 09:43:07.522269 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:43:07 crc kubenswrapper[4644]: I0204 09:43:07.599236 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:43:07 crc kubenswrapper[4644]: I0204 09:43:07.763530 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hqqb"]
Feb 04 09:43:09 crc kubenswrapper[4644]: I0204 09:43:09.063470 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7hqqb" podUID="852784ec-3433-494b-9c40-e9fa8d6ff7ad" containerName="registry-server" containerID="cri-o://c36f21a65aab9f0c04b929734ad547dafcc0a54835800993a7cd12376a96f2e1" gracePeriod=2
Feb 04 09:43:09 crc kubenswrapper[4644]: I0204 09:43:09.602173 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:43:09 crc kubenswrapper[4644]: I0204 09:43:09.757238 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852784ec-3433-494b-9c40-e9fa8d6ff7ad-catalog-content\") pod \"852784ec-3433-494b-9c40-e9fa8d6ff7ad\" (UID: \"852784ec-3433-494b-9c40-e9fa8d6ff7ad\") "
Feb 04 09:43:09 crc kubenswrapper[4644]: I0204 09:43:09.757426 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9bpd\" (UniqueName: \"kubernetes.io/projected/852784ec-3433-494b-9c40-e9fa8d6ff7ad-kube-api-access-j9bpd\") pod \"852784ec-3433-494b-9c40-e9fa8d6ff7ad\" (UID: \"852784ec-3433-494b-9c40-e9fa8d6ff7ad\") "
Feb 04 09:43:09 crc kubenswrapper[4644]: I0204 09:43:09.757460 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852784ec-3433-494b-9c40-e9fa8d6ff7ad-utilities\") pod \"852784ec-3433-494b-9c40-e9fa8d6ff7ad\" (UID: \"852784ec-3433-494b-9c40-e9fa8d6ff7ad\") "
Feb 04 09:43:09 crc kubenswrapper[4644]: I0204 09:43:09.758244 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/852784ec-3433-494b-9c40-e9fa8d6ff7ad-utilities" (OuterVolumeSpecName: "utilities") pod "852784ec-3433-494b-9c40-e9fa8d6ff7ad" (UID: "852784ec-3433-494b-9c40-e9fa8d6ff7ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:43:09 crc kubenswrapper[4644]: I0204 09:43:09.768358 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/852784ec-3433-494b-9c40-e9fa8d6ff7ad-kube-api-access-j9bpd" (OuterVolumeSpecName: "kube-api-access-j9bpd") pod "852784ec-3433-494b-9c40-e9fa8d6ff7ad" (UID: "852784ec-3433-494b-9c40-e9fa8d6ff7ad"). InnerVolumeSpecName "kube-api-access-j9bpd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 09:43:09 crc kubenswrapper[4644]: I0204 09:43:09.821164 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/852784ec-3433-494b-9c40-e9fa8d6ff7ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "852784ec-3433-494b-9c40-e9fa8d6ff7ad" (UID: "852784ec-3433-494b-9c40-e9fa8d6ff7ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 09:43:09 crc kubenswrapper[4644]: I0204 09:43:09.859753 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9bpd\" (UniqueName: \"kubernetes.io/projected/852784ec-3433-494b-9c40-e9fa8d6ff7ad-kube-api-access-j9bpd\") on node \"crc\" DevicePath \"\""
Feb 04 09:43:09 crc kubenswrapper[4644]: I0204 09:43:09.859794 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852784ec-3433-494b-9c40-e9fa8d6ff7ad-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 09:43:09 crc kubenswrapper[4644]: I0204 09:43:09.859803 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852784ec-3433-494b-9c40-e9fa8d6ff7ad-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.076391 4644 generic.go:334] "Generic (PLEG): container finished" podID="852784ec-3433-494b-9c40-e9fa8d6ff7ad" containerID="c36f21a65aab9f0c04b929734ad547dafcc0a54835800993a7cd12376a96f2e1" exitCode=0
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.076444 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hqqb" event={"ID":"852784ec-3433-494b-9c40-e9fa8d6ff7ad","Type":"ContainerDied","Data":"c36f21a65aab9f0c04b929734ad547dafcc0a54835800993a7cd12376a96f2e1"}
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.076477 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hqqb" event={"ID":"852784ec-3433-494b-9c40-e9fa8d6ff7ad","Type":"ContainerDied","Data":"3008daedcd3ec50f91409d32dee01e976126aeef3fa4f7efde92d1fabc20c1d0"}
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.076498 4644 scope.go:117] "RemoveContainer" containerID="c36f21a65aab9f0c04b929734ad547dafcc0a54835800993a7cd12376a96f2e1"
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.076682 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hqqb"
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.112447 4644 scope.go:117] "RemoveContainer" containerID="07f1f37905f72a7287c589fe9c69fd95b8da0e114b1d2d129519d1df27edaf3d"
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.128882 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hqqb"]
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.139794 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7hqqb"]
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.146904 4644 scope.go:117] "RemoveContainer" containerID="6a5d8b64deff35629a5855a58b3efda61388719179747756040c241058d6d822"
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.204692 4644 scope.go:117] "RemoveContainer" containerID="c36f21a65aab9f0c04b929734ad547dafcc0a54835800993a7cd12376a96f2e1"
Feb 04 09:43:10 crc kubenswrapper[4644]: E0204 09:43:10.205368 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c36f21a65aab9f0c04b929734ad547dafcc0a54835800993a7cd12376a96f2e1\": container with ID starting with c36f21a65aab9f0c04b929734ad547dafcc0a54835800993a7cd12376a96f2e1 not found: ID does not exist" containerID="c36f21a65aab9f0c04b929734ad547dafcc0a54835800993a7cd12376a96f2e1"
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.205419 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c36f21a65aab9f0c04b929734ad547dafcc0a54835800993a7cd12376a96f2e1"} err="failed to get container status \"c36f21a65aab9f0c04b929734ad547dafcc0a54835800993a7cd12376a96f2e1\": rpc error: code = NotFound desc = could not find container \"c36f21a65aab9f0c04b929734ad547dafcc0a54835800993a7cd12376a96f2e1\": container with ID starting with c36f21a65aab9f0c04b929734ad547dafcc0a54835800993a7cd12376a96f2e1 not found: ID does not exist"
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.205449 4644 scope.go:117] "RemoveContainer" containerID="07f1f37905f72a7287c589fe9c69fd95b8da0e114b1d2d129519d1df27edaf3d"
Feb 04 09:43:10 crc kubenswrapper[4644]: E0204 09:43:10.205941 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07f1f37905f72a7287c589fe9c69fd95b8da0e114b1d2d129519d1df27edaf3d\": container with ID starting with 07f1f37905f72a7287c589fe9c69fd95b8da0e114b1d2d129519d1df27edaf3d not found: ID does not exist" containerID="07f1f37905f72a7287c589fe9c69fd95b8da0e114b1d2d129519d1df27edaf3d"
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.205982 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07f1f37905f72a7287c589fe9c69fd95b8da0e114b1d2d129519d1df27edaf3d"} err="failed to get container status \"07f1f37905f72a7287c589fe9c69fd95b8da0e114b1d2d129519d1df27edaf3d\": rpc error: code = NotFound desc = could not find container \"07f1f37905f72a7287c589fe9c69fd95b8da0e114b1d2d129519d1df27edaf3d\": container with ID starting with 07f1f37905f72a7287c589fe9c69fd95b8da0e114b1d2d129519d1df27edaf3d not found: ID does not exist"
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.206013 4644 scope.go:117] "RemoveContainer" containerID="6a5d8b64deff35629a5855a58b3efda61388719179747756040c241058d6d822"
Feb 04 09:43:10 crc kubenswrapper[4644]: E0204 09:43:10.206403 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a5d8b64deff35629a5855a58b3efda61388719179747756040c241058d6d822\": container with ID starting with 6a5d8b64deff35629a5855a58b3efda61388719179747756040c241058d6d822 not found: ID does not exist" containerID="6a5d8b64deff35629a5855a58b3efda61388719179747756040c241058d6d822"
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.206437 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a5d8b64deff35629a5855a58b3efda61388719179747756040c241058d6d822"} err="failed to get container status \"6a5d8b64deff35629a5855a58b3efda61388719179747756040c241058d6d822\": rpc error: code = NotFound desc = could not find container \"6a5d8b64deff35629a5855a58b3efda61388719179747756040c241058d6d822\": container with ID starting with 6a5d8b64deff35629a5855a58b3efda61388719179747756040c241058d6d822 not found: ID does not exist"
Feb 04 09:43:10 crc kubenswrapper[4644]: I0204 09:43:10.671651 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="852784ec-3433-494b-9c40-e9fa8d6ff7ad" path="/var/lib/kubelet/pods/852784ec-3433-494b-9c40-e9fa8d6ff7ad/volumes"
Feb 04 09:43:19 crc kubenswrapper[4644]: I0204 09:43:19.661350 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66"
Feb 04 09:43:19 crc kubenswrapper[4644]: E0204 09:43:19.662532 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:43:33 crc kubenswrapper[4644]: I0204 09:43:33.659384 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66"
Feb 04 09:43:33 crc kubenswrapper[4644]: E0204 09:43:33.660100 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:43:46 crc kubenswrapper[4644]: I0204 09:43:46.660519 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66"
Feb 04 09:43:46 crc kubenswrapper[4644]: E0204 09:43:46.661323 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:43:57 crc kubenswrapper[4644]: I0204 09:43:57.659871 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66"
Feb 04 09:43:57 crc kubenswrapper[4644]: E0204 09:43:57.660667 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:44:11 crc kubenswrapper[4644]: I0204 09:44:11.660478 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66"
Feb 04 09:44:11 crc kubenswrapper[4644]: E0204 09:44:11.661293 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:44:22 crc kubenswrapper[4644]: I0204 09:44:22.660567 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66"
Feb 04 09:44:22 crc kubenswrapper[4644]: E0204 09:44:22.661189 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:44:35 crc kubenswrapper[4644]: I0204 09:44:35.661442 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66"
Feb 04 09:44:35 crc kubenswrapper[4644]: E0204 09:44:35.662585 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:44:47 crc kubenswrapper[4644]: I0204 09:44:47.661268 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66"
Feb 04 09:44:47 crc kubenswrapper[4644]: E0204 09:44:47.662402 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483"
Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.183619 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m"]
Feb 04 09:45:00 crc kubenswrapper[4644]: E0204 09:45:00.184615 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852784ec-3433-494b-9c40-e9fa8d6ff7ad" containerName="registry-server"
Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.184636 4644 state_mem.go:107] "Deleted CPUSet assignment"
podUID="852784ec-3433-494b-9c40-e9fa8d6ff7ad" containerName="registry-server" Feb 04 09:45:00 crc kubenswrapper[4644]: E0204 09:45:00.184664 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852784ec-3433-494b-9c40-e9fa8d6ff7ad" containerName="extract-utilities" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.184672 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="852784ec-3433-494b-9c40-e9fa8d6ff7ad" containerName="extract-utilities" Feb 04 09:45:00 crc kubenswrapper[4644]: E0204 09:45:00.184708 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852784ec-3433-494b-9c40-e9fa8d6ff7ad" containerName="extract-content" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.184715 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="852784ec-3433-494b-9c40-e9fa8d6ff7ad" containerName="extract-content" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.184946 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="852784ec-3433-494b-9c40-e9fa8d6ff7ad" containerName="registry-server" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.185698 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.193516 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.193566 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.205753 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m"] Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.268490 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qvl2\" (UniqueName: \"kubernetes.io/projected/098a6003-0777-4c9a-961c-47f807849f0a-kube-api-access-5qvl2\") pod \"collect-profiles-29503305-rdx6m\" (UID: \"098a6003-0777-4c9a-961c-47f807849f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.268548 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/098a6003-0777-4c9a-961c-47f807849f0a-secret-volume\") pod \"collect-profiles-29503305-rdx6m\" (UID: \"098a6003-0777-4c9a-961c-47f807849f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.268653 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/098a6003-0777-4c9a-961c-47f807849f0a-config-volume\") pod \"collect-profiles-29503305-rdx6m\" (UID: \"098a6003-0777-4c9a-961c-47f807849f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.370542 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/098a6003-0777-4c9a-961c-47f807849f0a-config-volume\") pod \"collect-profiles-29503305-rdx6m\" (UID: 
\"098a6003-0777-4c9a-961c-47f807849f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.370671 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qvl2\" (UniqueName: \"kubernetes.io/projected/098a6003-0777-4c9a-961c-47f807849f0a-kube-api-access-5qvl2\") pod \"collect-profiles-29503305-rdx6m\" (UID: \"098a6003-0777-4c9a-961c-47f807849f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.370699 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/098a6003-0777-4c9a-961c-47f807849f0a-secret-volume\") pod \"collect-profiles-29503305-rdx6m\" (UID: \"098a6003-0777-4c9a-961c-47f807849f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.371414 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/098a6003-0777-4c9a-961c-47f807849f0a-config-volume\") pod \"collect-profiles-29503305-rdx6m\" (UID: \"098a6003-0777-4c9a-961c-47f807849f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.376080 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/098a6003-0777-4c9a-961c-47f807849f0a-secret-volume\") pod \"collect-profiles-29503305-rdx6m\" (UID: \"098a6003-0777-4c9a-961c-47f807849f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.393452 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qvl2\" (UniqueName: \"kubernetes.io/projected/098a6003-0777-4c9a-961c-47f807849f0a-kube-api-access-5qvl2\") pod \"collect-profiles-29503305-rdx6m\" (UID: \"098a6003-0777-4c9a-961c-47f807849f0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" Feb 04 09:45:00 crc kubenswrapper[4644]: I0204 09:45:00.512870 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" Feb 04 09:45:01 crc kubenswrapper[4644]: I0204 09:45:01.024128 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m"] Feb 04 09:45:01 crc kubenswrapper[4644]: I0204 09:45:01.079932 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" event={"ID":"098a6003-0777-4c9a-961c-47f807849f0a","Type":"ContainerStarted","Data":"58d0cb43bdc3c339a3e43bb63dd0e113b49cffff9afa058e8047147957f772f6"} Feb 04 09:45:02 crc kubenswrapper[4644]: I0204 09:45:02.090614 4644 generic.go:334] "Generic (PLEG): container finished" podID="098a6003-0777-4c9a-961c-47f807849f0a" containerID="26220a6779b4f55c24a1ad21d4a067213ca15ccc50d4ed1704df23b46b34e622" exitCode=0 Feb 04 09:45:02 crc kubenswrapper[4644]: I0204 09:45:02.090688 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" event={"ID":"098a6003-0777-4c9a-961c-47f807849f0a","Type":"ContainerDied","Data":"26220a6779b4f55c24a1ad21d4a067213ca15ccc50d4ed1704df23b46b34e622"} Feb 04 09:45:03 crc kubenswrapper[4644]: I0204 09:45:03.345975 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:45:03 crc kubenswrapper[4644]: E0204 09:45:03.346637 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:45:03 crc kubenswrapper[4644]: I0204 09:45:03.946541 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" Feb 04 09:45:04 crc kubenswrapper[4644]: I0204 09:45:04.057896 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qvl2\" (UniqueName: \"kubernetes.io/projected/098a6003-0777-4c9a-961c-47f807849f0a-kube-api-access-5qvl2\") pod \"098a6003-0777-4c9a-961c-47f807849f0a\" (UID: \"098a6003-0777-4c9a-961c-47f807849f0a\") " Feb 04 09:45:04 crc kubenswrapper[4644]: I0204 09:45:04.058043 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/098a6003-0777-4c9a-961c-47f807849f0a-config-volume\") pod \"098a6003-0777-4c9a-961c-47f807849f0a\" (UID: \"098a6003-0777-4c9a-961c-47f807849f0a\") " Feb 04 09:45:04 crc kubenswrapper[4644]: I0204 09:45:04.058133 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/098a6003-0777-4c9a-961c-47f807849f0a-secret-volume\") pod \"098a6003-0777-4c9a-961c-47f807849f0a\" (UID: \"098a6003-0777-4c9a-961c-47f807849f0a\") " Feb 04 09:45:04 crc kubenswrapper[4644]: I0204 09:45:04.058833 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/098a6003-0777-4c9a-961c-47f807849f0a-config-volume" (OuterVolumeSpecName: "config-volume") pod "098a6003-0777-4c9a-961c-47f807849f0a" (UID: "098a6003-0777-4c9a-961c-47f807849f0a"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:45:04 crc kubenswrapper[4644]: I0204 09:45:04.066022 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098a6003-0777-4c9a-961c-47f807849f0a-kube-api-access-5qvl2" (OuterVolumeSpecName: "kube-api-access-5qvl2") pod "098a6003-0777-4c9a-961c-47f807849f0a" (UID: "098a6003-0777-4c9a-961c-47f807849f0a"). InnerVolumeSpecName "kube-api-access-5qvl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:45:04 crc kubenswrapper[4644]: I0204 09:45:04.073547 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098a6003-0777-4c9a-961c-47f807849f0a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "098a6003-0777-4c9a-961c-47f807849f0a" (UID: "098a6003-0777-4c9a-961c-47f807849f0a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:45:04 crc kubenswrapper[4644]: I0204 09:45:04.160207 4644 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/098a6003-0777-4c9a-961c-47f807849f0a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 04 09:45:04 crc kubenswrapper[4644]: I0204 09:45:04.160256 4644 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/098a6003-0777-4c9a-961c-47f807849f0a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 04 09:45:04 crc kubenswrapper[4644]: I0204 09:45:04.160266 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qvl2\" (UniqueName: \"kubernetes.io/projected/098a6003-0777-4c9a-961c-47f807849f0a-kube-api-access-5qvl2\") on node \"crc\" DevicePath \"\"" Feb 04 09:45:04 crc kubenswrapper[4644]: I0204 09:45:04.383565 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" event={"ID":"098a6003-0777-4c9a-961c-47f807849f0a","Type":"ContainerDied","Data":"58d0cb43bdc3c339a3e43bb63dd0e113b49cffff9afa058e8047147957f772f6"} Feb 04 09:45:04 crc kubenswrapper[4644]: I0204 09:45:04.383600 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58d0cb43bdc3c339a3e43bb63dd0e113b49cffff9afa058e8047147957f772f6" Feb 04 09:45:04 crc kubenswrapper[4644]: I0204 09:45:04.384869 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503305-rdx6m" Feb 04 09:45:05 crc kubenswrapper[4644]: I0204 09:45:05.026758 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6"] Feb 04 09:45:05 crc kubenswrapper[4644]: I0204 09:45:05.035092 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503260-wpjf6"] Feb 04 09:45:06 crc kubenswrapper[4644]: I0204 09:45:06.689688 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="774fde1c-3407-4779-8e1e-e884b86ea91e" path="/var/lib/kubelet/pods/774fde1c-3407-4779-8e1e-e884b86ea91e/volumes" Feb 04 09:45:16 crc kubenswrapper[4644]: I0204 09:45:16.659866 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:45:16 crc kubenswrapper[4644]: E0204 09:45:16.660835 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:45:31 crc kubenswrapper[4644]: I0204 09:45:31.659441 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:45:31 crc kubenswrapper[4644]: E0204 09:45:31.660164 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:45:44 crc kubenswrapper[4644]: I0204 09:45:44.659317 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:45:44 crc kubenswrapper[4644]: E0204 09:45:44.660080 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:45:55 crc kubenswrapper[4644]: I0204 09:45:55.661224 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:45:55 crc kubenswrapper[4644]: E0204 09:45:55.662090 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:46:03 crc kubenswrapper[4644]: I0204 09:46:03.439193 4644 scope.go:117] "RemoveContainer" 
containerID="0995d52beecf8d50e6a354b08deb4243956220224842c108f1648105e4b8e5a5" Feb 04 09:46:10 crc kubenswrapper[4644]: I0204 09:46:10.668575 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:46:10 crc kubenswrapper[4644]: E0204 09:46:10.669256 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.265356 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zg5q4"] Feb 04 09:46:23 crc kubenswrapper[4644]: E0204 09:46:23.266154 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098a6003-0777-4c9a-961c-47f807849f0a" containerName="collect-profiles" Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.266167 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="098a6003-0777-4c9a-961c-47f807849f0a" containerName="collect-profiles" Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.266419 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="098a6003-0777-4c9a-961c-47f807849f0a" containerName="collect-profiles" Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.267793 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.280283 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zg5q4"] Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.414595 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cc036fc-6307-4816-8db7-70d9e9f80bec-utilities\") pod \"redhat-operators-zg5q4\" (UID: \"1cc036fc-6307-4816-8db7-70d9e9f80bec\") " pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.414998 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf85l\" (UniqueName: \"kubernetes.io/projected/1cc036fc-6307-4816-8db7-70d9e9f80bec-kube-api-access-rf85l\") pod \"redhat-operators-zg5q4\" (UID: \"1cc036fc-6307-4816-8db7-70d9e9f80bec\") " pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.415055 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cc036fc-6307-4816-8db7-70d9e9f80bec-catalog-content\") pod \"redhat-operators-zg5q4\" (UID: \"1cc036fc-6307-4816-8db7-70d9e9f80bec\") " pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.516426 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cc036fc-6307-4816-8db7-70d9e9f80bec-utilities\") pod \"redhat-operators-zg5q4\" (UID: \"1cc036fc-6307-4816-8db7-70d9e9f80bec\") " pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.516762 
4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf85l\" (UniqueName: \"kubernetes.io/projected/1cc036fc-6307-4816-8db7-70d9e9f80bec-kube-api-access-rf85l\") pod \"redhat-operators-zg5q4\" (UID: \"1cc036fc-6307-4816-8db7-70d9e9f80bec\") " pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.516867 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cc036fc-6307-4816-8db7-70d9e9f80bec-catalog-content\") pod \"redhat-operators-zg5q4\" (UID: \"1cc036fc-6307-4816-8db7-70d9e9f80bec\") " pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.516942 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cc036fc-6307-4816-8db7-70d9e9f80bec-utilities\") pod \"redhat-operators-zg5q4\" (UID: \"1cc036fc-6307-4816-8db7-70d9e9f80bec\") " pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.517292 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cc036fc-6307-4816-8db7-70d9e9f80bec-catalog-content\") pod \"redhat-operators-zg5q4\" (UID: \"1cc036fc-6307-4816-8db7-70d9e9f80bec\") " pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.546338 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf85l\" (UniqueName: \"kubernetes.io/projected/1cc036fc-6307-4816-8db7-70d9e9f80bec-kube-api-access-rf85l\") pod \"redhat-operators-zg5q4\" (UID: \"1cc036fc-6307-4816-8db7-70d9e9f80bec\") " pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.598726 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:46:23 crc kubenswrapper[4644]: I0204 09:46:23.661230 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:46:23 crc kubenswrapper[4644]: E0204 09:46:23.661473 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:46:24 crc kubenswrapper[4644]: I0204 09:46:24.104017 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zg5q4"] Feb 04 09:46:24 crc kubenswrapper[4644]: I0204 09:46:24.247493 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg5q4" event={"ID":"1cc036fc-6307-4816-8db7-70d9e9f80bec","Type":"ContainerStarted","Data":"d57fab8df55388ed93bcb517b41755747a5d0c977b69866790bcd90d40571093"} Feb 04 09:46:25 crc kubenswrapper[4644]: I0204 09:46:25.265698 4644 generic.go:334] "Generic (PLEG): container finished" podID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerID="34b5d86899aa61a221ac26f31f9c6764ad3ba0caa7168f4da7273a8d5f7d67b7" exitCode=0 Feb 04 09:46:25 crc kubenswrapper[4644]: I0204 09:46:25.265760 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg5q4" event={"ID":"1cc036fc-6307-4816-8db7-70d9e9f80bec","Type":"ContainerDied","Data":"34b5d86899aa61a221ac26f31f9c6764ad3ba0caa7168f4da7273a8d5f7d67b7"} Feb 04 09:46:25 crc kubenswrapper[4644]: I0204 09:46:25.268895 4644 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 09:46:26 crc kubenswrapper[4644]: I0204 09:46:26.290104 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg5q4" event={"ID":"1cc036fc-6307-4816-8db7-70d9e9f80bec","Type":"ContainerStarted","Data":"e09a6cf719faa7bc907de748b1d6297968136c1e9555e8369a26703fe34fbff9"} Feb 04 09:46:36 crc kubenswrapper[4644]: I0204 09:46:36.368976 4644 generic.go:334] "Generic (PLEG): container finished" podID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerID="e09a6cf719faa7bc907de748b1d6297968136c1e9555e8369a26703fe34fbff9" exitCode=0 Feb 04 09:46:36 crc kubenswrapper[4644]: I0204 09:46:36.369042 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg5q4" event={"ID":"1cc036fc-6307-4816-8db7-70d9e9f80bec","Type":"ContainerDied","Data":"e09a6cf719faa7bc907de748b1d6297968136c1e9555e8369a26703fe34fbff9"} Feb 04 09:46:36 crc kubenswrapper[4644]: I0204 09:46:36.663345 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:46:36 crc kubenswrapper[4644]: E0204 09:46:36.663573 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 
09:46:37 crc kubenswrapper[4644]: I0204 09:46:37.378484 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg5q4" event={"ID":"1cc036fc-6307-4816-8db7-70d9e9f80bec","Type":"ContainerStarted","Data":"99e301098a0cdbc4982ca338be85a9deb9acc09044cabdd87bf7107d13d23904"} Feb 04 09:46:37 crc kubenswrapper[4644]: I0204 09:46:37.419636 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zg5q4" podStartSLOduration=2.727647344 podStartE2EDuration="14.419608877s" podCreationTimestamp="2026-02-04 09:46:23 +0000 UTC" firstStartedPulling="2026-02-04 09:46:25.268643996 +0000 UTC m=+3895.308701751" lastFinishedPulling="2026-02-04 09:46:36.960605529 +0000 UTC m=+3907.000663284" observedRunningTime="2026-02-04 09:46:37.406862871 +0000 UTC m=+3907.446920626" watchObservedRunningTime="2026-02-04 09:46:37.419608877 +0000 UTC m=+3907.459666632" Feb 04 09:46:43 crc kubenswrapper[4644]: I0204 09:46:43.598936 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:46:43 crc kubenswrapper[4644]: I0204 09:46:43.599465 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:46:44 crc kubenswrapper[4644]: I0204 09:46:44.647898 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg5q4" podUID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerName="registry-server" probeResult="failure" output=< Feb 04 09:46:44 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:46:44 crc kubenswrapper[4644]: > Feb 04 09:46:49 crc kubenswrapper[4644]: I0204 09:46:49.659546 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:46:49 crc kubenswrapper[4644]: E0204 09:46:49.660285 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:46:54 crc kubenswrapper[4644]: I0204 09:46:54.649499 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg5q4" podUID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerName="registry-server" probeResult="failure" output=< Feb 04 09:46:54 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:46:54 crc kubenswrapper[4644]: > Feb 04 09:47:00 crc kubenswrapper[4644]: I0204 09:47:00.668802 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:47:00 crc kubenswrapper[4644]: E0204 09:47:00.669739 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:47:04 crc kubenswrapper[4644]: I0204 
09:47:04.652513 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg5q4" podUID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerName="registry-server" probeResult="failure" output=< Feb 04 09:47:04 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:47:04 crc kubenswrapper[4644]: > Feb 04 09:47:07 crc kubenswrapper[4644]: I0204 09:47:07.877285 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-97zns"] Feb 04 09:47:07 crc kubenswrapper[4644]: I0204 09:47:07.880177 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:07 crc kubenswrapper[4644]: I0204 09:47:07.889609 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-97zns"] Feb 04 09:47:07 crc kubenswrapper[4644]: I0204 09:47:07.962624 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-catalog-content\") pod \"redhat-marketplace-97zns\" (UID: \"b8dee946-7d8b-428f-8ea9-890cebc1f6ec\") " pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:07 crc kubenswrapper[4644]: I0204 09:47:07.962896 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwsbk\" (UniqueName: \"kubernetes.io/projected/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-kube-api-access-kwsbk\") pod \"redhat-marketplace-97zns\" (UID: \"b8dee946-7d8b-428f-8ea9-890cebc1f6ec\") " pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:07 crc kubenswrapper[4644]: I0204 09:47:07.963015 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-utilities\") pod \"redhat-marketplace-97zns\" (UID: \"b8dee946-7d8b-428f-8ea9-890cebc1f6ec\") " pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:08 crc kubenswrapper[4644]: I0204 09:47:08.065353 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwsbk\" (UniqueName: \"kubernetes.io/projected/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-kube-api-access-kwsbk\") pod \"redhat-marketplace-97zns\" (UID: \"b8dee946-7d8b-428f-8ea9-890cebc1f6ec\") " pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:08 crc kubenswrapper[4644]: I0204 09:47:08.065412 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-utilities\") pod \"redhat-marketplace-97zns\" (UID: \"b8dee946-7d8b-428f-8ea9-890cebc1f6ec\") " pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:08 crc kubenswrapper[4644]: I0204 09:47:08.065509 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-catalog-content\") pod \"redhat-marketplace-97zns\" (UID: \"b8dee946-7d8b-428f-8ea9-890cebc1f6ec\") " pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:08 crc kubenswrapper[4644]: I0204 09:47:08.066008 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-catalog-content\") pod \"redhat-marketplace-97zns\" (UID: \"b8dee946-7d8b-428f-8ea9-890cebc1f6ec\") " pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:08 crc kubenswrapper[4644]: I0204 09:47:08.066102 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-utilities\") pod \"redhat-marketplace-97zns\" (UID: \"b8dee946-7d8b-428f-8ea9-890cebc1f6ec\") " pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:08 crc kubenswrapper[4644]: I0204 09:47:08.094597 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwsbk\" (UniqueName: \"kubernetes.io/projected/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-kube-api-access-kwsbk\") pod \"redhat-marketplace-97zns\" (UID: \"b8dee946-7d8b-428f-8ea9-890cebc1f6ec\") " pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:08 crc kubenswrapper[4644]: I0204 09:47:08.217474 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:08 crc kubenswrapper[4644]: I0204 09:47:08.829924 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-97zns"] Feb 04 09:47:09 crc kubenswrapper[4644]: I0204 09:47:09.668035 4644 generic.go:334] "Generic (PLEG): container finished" podID="b8dee946-7d8b-428f-8ea9-890cebc1f6ec" containerID="4d0ef4f872eeae9e53db6bfc94217fbc551d184a4da176038c4dd86114faf4a9" exitCode=0 Feb 04 09:47:09 crc kubenswrapper[4644]: I0204 09:47:09.668093 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97zns" event={"ID":"b8dee946-7d8b-428f-8ea9-890cebc1f6ec","Type":"ContainerDied","Data":"4d0ef4f872eeae9e53db6bfc94217fbc551d184a4da176038c4dd86114faf4a9"} Feb 04 09:47:09 crc kubenswrapper[4644]: I0204 09:47:09.668281 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97zns" event={"ID":"b8dee946-7d8b-428f-8ea9-890cebc1f6ec","Type":"ContainerStarted","Data":"24d1d63bd6fe21a1cc6164afa0e8ab478fede60a60987c2756c5768af31ee9d9"} Feb 04 09:47:10 crc kubenswrapper[4644]: I0204 09:47:10.677249 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97zns" event={"ID":"b8dee946-7d8b-428f-8ea9-890cebc1f6ec","Type":"ContainerStarted","Data":"922bd374dac6bda17ee35984dafcd7537fda908dcc006b914889b3ec865e0d28"} Feb 04 09:47:13 crc kubenswrapper[4644]: I0204 09:47:13.665916 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:47:13 crc kubenswrapper[4644]: E0204 09:47:13.666812 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:47:13 crc kubenswrapper[4644]: I0204 09:47:13.713681 4644 generic.go:334] "Generic (PLEG): container finished" podID="b8dee946-7d8b-428f-8ea9-890cebc1f6ec" containerID="922bd374dac6bda17ee35984dafcd7537fda908dcc006b914889b3ec865e0d28" exitCode=0 Feb 04 09:47:13 crc kubenswrapper[4644]: I0204 
09:47:13.713744 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97zns" event={"ID":"b8dee946-7d8b-428f-8ea9-890cebc1f6ec","Type":"ContainerDied","Data":"922bd374dac6bda17ee35984dafcd7537fda908dcc006b914889b3ec865e0d28"} Feb 04 09:47:15 crc kubenswrapper[4644]: I0204 09:47:15.738523 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97zns" event={"ID":"b8dee946-7d8b-428f-8ea9-890cebc1f6ec","Type":"ContainerStarted","Data":"4ffdc6f6f136864571812c58e22613dc4c6aa0c912e23fe87659d01b6b937948"} Feb 04 09:47:15 crc kubenswrapper[4644]: I0204 09:47:15.789406 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg5q4" podUID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerName="registry-server" probeResult="failure" output=< Feb 04 09:47:15 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:47:15 crc kubenswrapper[4644]: > Feb 04 09:47:18 crc kubenswrapper[4644]: I0204 09:47:18.217842 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:18 crc kubenswrapper[4644]: I0204 09:47:18.218237 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:19 crc kubenswrapper[4644]: I0204 09:47:19.278206 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-97zns" podUID="b8dee946-7d8b-428f-8ea9-890cebc1f6ec" containerName="registry-server" probeResult="failure" output=< Feb 04 09:47:19 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:47:19 crc kubenswrapper[4644]: > Feb 04 09:47:24 crc kubenswrapper[4644]: I0204 09:47:24.673121 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg5q4" podUID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerName="registry-server" probeResult="failure" output=< Feb 04 09:47:24 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:47:24 crc kubenswrapper[4644]: > Feb 04 09:47:28 crc kubenswrapper[4644]: I0204 09:47:28.280395 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:28 crc kubenswrapper[4644]: I0204 09:47:28.305462 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-97zns" podStartSLOduration=16.72320005 podStartE2EDuration="21.305440987s" podCreationTimestamp="2026-02-04 09:47:07 +0000 UTC" firstStartedPulling="2026-02-04 09:47:09.672811124 +0000 UTC m=+3939.712868879" lastFinishedPulling="2026-02-04 09:47:14.255052061 +0000 UTC m=+3944.295109816" observedRunningTime="2026-02-04 09:47:15.768410582 +0000 UTC m=+3945.808468337" watchObservedRunningTime="2026-02-04 09:47:28.305440987 +0000 UTC m=+3958.345498752" Feb 04 09:47:28 crc kubenswrapper[4644]: I0204 09:47:28.340567 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:28 crc kubenswrapper[4644]: I0204 09:47:28.517706 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-97zns"] Feb 04 09:47:28 crc kubenswrapper[4644]: I0204 09:47:28.660160 4644 scope.go:117] "RemoveContainer" 
containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:47:28 crc kubenswrapper[4644]: E0204 09:47:28.660683 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:47:29 crc kubenswrapper[4644]: I0204 09:47:29.860381 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-97zns" podUID="b8dee946-7d8b-428f-8ea9-890cebc1f6ec" containerName="registry-server" containerID="cri-o://4ffdc6f6f136864571812c58e22613dc4c6aa0c912e23fe87659d01b6b937948" gracePeriod=2 Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.450657 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.553305 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-catalog-content\") pod \"b8dee946-7d8b-428f-8ea9-890cebc1f6ec\" (UID: \"b8dee946-7d8b-428f-8ea9-890cebc1f6ec\") " Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.553945 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-utilities\") pod \"b8dee946-7d8b-428f-8ea9-890cebc1f6ec\" (UID: \"b8dee946-7d8b-428f-8ea9-890cebc1f6ec\") " Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.554139 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwsbk\" (UniqueName: \"kubernetes.io/projected/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-kube-api-access-kwsbk\") pod \"b8dee946-7d8b-428f-8ea9-890cebc1f6ec\" (UID: \"b8dee946-7d8b-428f-8ea9-890cebc1f6ec\") " Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.554544 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-utilities" (OuterVolumeSpecName: "utilities") pod "b8dee946-7d8b-428f-8ea9-890cebc1f6ec" (UID: "b8dee946-7d8b-428f-8ea9-890cebc1f6ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.557108 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.573407 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-kube-api-access-kwsbk" (OuterVolumeSpecName: "kube-api-access-kwsbk") pod "b8dee946-7d8b-428f-8ea9-890cebc1f6ec" (UID: "b8dee946-7d8b-428f-8ea9-890cebc1f6ec"). InnerVolumeSpecName "kube-api-access-kwsbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.630421 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8dee946-7d8b-428f-8ea9-890cebc1f6ec" (UID: "b8dee946-7d8b-428f-8ea9-890cebc1f6ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.659097 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwsbk\" (UniqueName: \"kubernetes.io/projected/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-kube-api-access-kwsbk\") on node \"crc\" DevicePath \"\"" Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.659135 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8dee946-7d8b-428f-8ea9-890cebc1f6ec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.871917 4644 generic.go:334] "Generic (PLEG): container finished" podID="b8dee946-7d8b-428f-8ea9-890cebc1f6ec" containerID="4ffdc6f6f136864571812c58e22613dc4c6aa0c912e23fe87659d01b6b937948" exitCode=0 Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.872087 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97zns" event={"ID":"b8dee946-7d8b-428f-8ea9-890cebc1f6ec","Type":"ContainerDied","Data":"4ffdc6f6f136864571812c58e22613dc4c6aa0c912e23fe87659d01b6b937948"} Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.872220 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97zns" Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.872245 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97zns" event={"ID":"b8dee946-7d8b-428f-8ea9-890cebc1f6ec","Type":"ContainerDied","Data":"24d1d63bd6fe21a1cc6164afa0e8ab478fede60a60987c2756c5768af31ee9d9"} Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.872275 4644 scope.go:117] "RemoveContainer" containerID="4ffdc6f6f136864571812c58e22613dc4c6aa0c912e23fe87659d01b6b937948" Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.905648 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-97zns"] Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.914019 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-97zns"] Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.916111 4644 scope.go:117] "RemoveContainer" containerID="922bd374dac6bda17ee35984dafcd7537fda908dcc006b914889b3ec865e0d28" Feb 04 09:47:30 crc kubenswrapper[4644]: I0204 09:47:30.958364 4644 scope.go:117] "RemoveContainer" containerID="4d0ef4f872eeae9e53db6bfc94217fbc551d184a4da176038c4dd86114faf4a9" Feb 04 09:47:31 crc kubenswrapper[4644]: I0204 09:47:31.003416 4644 scope.go:117] "RemoveContainer" containerID="4ffdc6f6f136864571812c58e22613dc4c6aa0c912e23fe87659d01b6b937948" Feb 04 09:47:31 crc kubenswrapper[4644]: E0204 09:47:31.003876 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ffdc6f6f136864571812c58e22613dc4c6aa0c912e23fe87659d01b6b937948\": container with ID starting with 
4ffdc6f6f136864571812c58e22613dc4c6aa0c912e23fe87659d01b6b937948 not found: ID does not exist" containerID="4ffdc6f6f136864571812c58e22613dc4c6aa0c912e23fe87659d01b6b937948" Feb 04 09:47:31 crc kubenswrapper[4644]: I0204 09:47:31.003918 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ffdc6f6f136864571812c58e22613dc4c6aa0c912e23fe87659d01b6b937948"} err="failed to get container status \"4ffdc6f6f136864571812c58e22613dc4c6aa0c912e23fe87659d01b6b937948\": rpc error: code = NotFound desc = could not find container \"4ffdc6f6f136864571812c58e22613dc4c6aa0c912e23fe87659d01b6b937948\": container with ID starting with 4ffdc6f6f136864571812c58e22613dc4c6aa0c912e23fe87659d01b6b937948 not found: ID does not exist" Feb 04 09:47:31 crc kubenswrapper[4644]: I0204 09:47:31.003945 4644 scope.go:117] "RemoveContainer" containerID="922bd374dac6bda17ee35984dafcd7537fda908dcc006b914889b3ec865e0d28" Feb 04 09:47:31 crc kubenswrapper[4644]: E0204 09:47:31.004245 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"922bd374dac6bda17ee35984dafcd7537fda908dcc006b914889b3ec865e0d28\": container with ID starting with 922bd374dac6bda17ee35984dafcd7537fda908dcc006b914889b3ec865e0d28 not found: ID does not exist" containerID="922bd374dac6bda17ee35984dafcd7537fda908dcc006b914889b3ec865e0d28" Feb 04 09:47:31 crc kubenswrapper[4644]: I0204 09:47:31.004275 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"922bd374dac6bda17ee35984dafcd7537fda908dcc006b914889b3ec865e0d28"} err="failed to get container status \"922bd374dac6bda17ee35984dafcd7537fda908dcc006b914889b3ec865e0d28\": rpc error: code = NotFound desc = could not find container \"922bd374dac6bda17ee35984dafcd7537fda908dcc006b914889b3ec865e0d28\": container with ID starting with 922bd374dac6bda17ee35984dafcd7537fda908dcc006b914889b3ec865e0d28 not found: ID does not exist" Feb 04 09:47:31 crc kubenswrapper[4644]: I0204 09:47:31.004294 4644 scope.go:117] "RemoveContainer" containerID="4d0ef4f872eeae9e53db6bfc94217fbc551d184a4da176038c4dd86114faf4a9" Feb 04 09:47:31 crc kubenswrapper[4644]: E0204 09:47:31.004597 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d0ef4f872eeae9e53db6bfc94217fbc551d184a4da176038c4dd86114faf4a9\": container with ID starting with 4d0ef4f872eeae9e53db6bfc94217fbc551d184a4da176038c4dd86114faf4a9 not found: ID does not exist" containerID="4d0ef4f872eeae9e53db6bfc94217fbc551d184a4da176038c4dd86114faf4a9" Feb 04 09:47:31 crc kubenswrapper[4644]: I0204 09:47:31.004621 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d0ef4f872eeae9e53db6bfc94217fbc551d184a4da176038c4dd86114faf4a9"} err="failed to get container status \"4d0ef4f872eeae9e53db6bfc94217fbc551d184a4da176038c4dd86114faf4a9\": rpc error: code = NotFound desc = could not find container \"4d0ef4f872eeae9e53db6bfc94217fbc551d184a4da176038c4dd86114faf4a9\": container with ID starting with 4d0ef4f872eeae9e53db6bfc94217fbc551d184a4da176038c4dd86114faf4a9 not found: ID does not exist" Feb 04 09:47:32 crc kubenswrapper[4644]: I0204 09:47:32.673228 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8dee946-7d8b-428f-8ea9-890cebc1f6ec" path="/var/lib/kubelet/pods/b8dee946-7d8b-428f-8ea9-890cebc1f6ec/volumes" Feb 04 09:47:34 crc kubenswrapper[4644]: I0204 09:47:34.866688 
4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg5q4" podUID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerName="registry-server" probeResult="failure" output=< Feb 04 09:47:34 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:47:34 crc kubenswrapper[4644]: > Feb 04 09:47:42 crc kubenswrapper[4644]: I0204 09:47:42.666119 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:47:42 crc kubenswrapper[4644]: E0204 09:47:42.666835 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:47:43 crc kubenswrapper[4644]: I0204 09:47:43.664105 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:47:43 crc kubenswrapper[4644]: I0204 09:47:43.842763 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:47:45 crc kubenswrapper[4644]: I0204 09:47:45.596845 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zg5q4"] Feb 04 09:47:45 crc kubenswrapper[4644]: I0204 09:47:45.597406 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zg5q4" podUID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerName="registry-server" containerID="cri-o://99e301098a0cdbc4982ca338be85a9deb9acc09044cabdd87bf7107d13d23904" gracePeriod=2 Feb 04 09:47:46 crc kubenswrapper[4644]: I0204 09:47:46.015206 4644 generic.go:334] "Generic (PLEG): container finished" podID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerID="99e301098a0cdbc4982ca338be85a9deb9acc09044cabdd87bf7107d13d23904" exitCode=0 Feb 04 09:47:46 crc kubenswrapper[4644]: I0204 09:47:46.015288 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg5q4" event={"ID":"1cc036fc-6307-4816-8db7-70d9e9f80bec","Type":"ContainerDied","Data":"99e301098a0cdbc4982ca338be85a9deb9acc09044cabdd87bf7107d13d23904"} Feb 04 09:47:46 crc kubenswrapper[4644]: I0204 09:47:46.258246 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:47:46 crc kubenswrapper[4644]: I0204 09:47:46.381018 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cc036fc-6307-4816-8db7-70d9e9f80bec-catalog-content\") pod \"1cc036fc-6307-4816-8db7-70d9e9f80bec\" (UID: \"1cc036fc-6307-4816-8db7-70d9e9f80bec\") " Feb 04 09:47:46 crc kubenswrapper[4644]: I0204 09:47:46.381052 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cc036fc-6307-4816-8db7-70d9e9f80bec-utilities\") pod \"1cc036fc-6307-4816-8db7-70d9e9f80bec\" (UID: \"1cc036fc-6307-4816-8db7-70d9e9f80bec\") " Feb 04 09:47:46 crc kubenswrapper[4644]: I0204 09:47:46.381112 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf85l\" (UniqueName: \"kubernetes.io/projected/1cc036fc-6307-4816-8db7-70d9e9f80bec-kube-api-access-rf85l\") pod \"1cc036fc-6307-4816-8db7-70d9e9f80bec\" (UID: \"1cc036fc-6307-4816-8db7-70d9e9f80bec\") " Feb 04 09:47:46 crc kubenswrapper[4644]: I0204 09:47:46.382278 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cc036fc-6307-4816-8db7-70d9e9f80bec-utilities" (OuterVolumeSpecName: "utilities") pod "1cc036fc-6307-4816-8db7-70d9e9f80bec" (UID: "1cc036fc-6307-4816-8db7-70d9e9f80bec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:47:46 crc kubenswrapper[4644]: I0204 09:47:46.387921 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cc036fc-6307-4816-8db7-70d9e9f80bec-kube-api-access-rf85l" (OuterVolumeSpecName: "kube-api-access-rf85l") pod "1cc036fc-6307-4816-8db7-70d9e9f80bec" (UID: "1cc036fc-6307-4816-8db7-70d9e9f80bec"). InnerVolumeSpecName "kube-api-access-rf85l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:47:46 crc kubenswrapper[4644]: I0204 09:47:46.483576 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cc036fc-6307-4816-8db7-70d9e9f80bec-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 09:47:46 crc kubenswrapper[4644]: I0204 09:47:46.483604 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf85l\" (UniqueName: \"kubernetes.io/projected/1cc036fc-6307-4816-8db7-70d9e9f80bec-kube-api-access-rf85l\") on node \"crc\" DevicePath \"\"" Feb 04 09:47:46 crc kubenswrapper[4644]: I0204 09:47:46.528956 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cc036fc-6307-4816-8db7-70d9e9f80bec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1cc036fc-6307-4816-8db7-70d9e9f80bec" (UID: "1cc036fc-6307-4816-8db7-70d9e9f80bec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:47:46 crc kubenswrapper[4644]: I0204 09:47:46.585306 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cc036fc-6307-4816-8db7-70d9e9f80bec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 09:47:47 crc kubenswrapper[4644]: I0204 09:47:47.032903 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg5q4" event={"ID":"1cc036fc-6307-4816-8db7-70d9e9f80bec","Type":"ContainerDied","Data":"d57fab8df55388ed93bcb517b41755747a5d0c977b69866790bcd90d40571093"} Feb 04 09:47:47 crc kubenswrapper[4644]: I0204 09:47:47.033496 4644 scope.go:117] "RemoveContainer" containerID="99e301098a0cdbc4982ca338be85a9deb9acc09044cabdd87bf7107d13d23904" Feb 04 09:47:47 crc kubenswrapper[4644]: I0204 09:47:47.033004 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zg5q4" Feb 04 09:47:47 crc kubenswrapper[4644]: I0204 09:47:47.075704 4644 scope.go:117] "RemoveContainer" containerID="e09a6cf719faa7bc907de748b1d6297968136c1e9555e8369a26703fe34fbff9" Feb 04 09:47:47 crc kubenswrapper[4644]: I0204 09:47:47.078712 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zg5q4"] Feb 04 09:47:47 crc kubenswrapper[4644]: I0204 09:47:47.091026 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zg5q4"] Feb 04 09:47:47 crc kubenswrapper[4644]: I0204 09:47:47.111093 4644 scope.go:117] "RemoveContainer" containerID="34b5d86899aa61a221ac26f31f9c6764ad3ba0caa7168f4da7273a8d5f7d67b7" Feb 04 09:47:48 crc kubenswrapper[4644]: I0204 09:47:48.671080 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cc036fc-6307-4816-8db7-70d9e9f80bec" path="/var/lib/kubelet/pods/1cc036fc-6307-4816-8db7-70d9e9f80bec/volumes" Feb 04 09:47:57 crc kubenswrapper[4644]: I0204 09:47:57.660356 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:47:57 crc kubenswrapper[4644]: E0204 09:47:57.661205 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:48:12 crc kubenswrapper[4644]: I0204 09:48:12.659602 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:48:13 crc kubenswrapper[4644]: I0204 09:48:13.281626 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"f5129befbb7587fea7f550af34a5bec8229582d69cfb16b12c52b0244e3fafa0"} Feb 04 09:48:30 crc kubenswrapper[4644]: I0204 09:48:30.978064 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dcq6f"] Feb 04 09:48:30 crc kubenswrapper[4644]: E0204 09:48:30.979126 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerName="extract-utilities" Feb 04 09:48:30 crc 
kubenswrapper[4644]: I0204 09:48:30.979142 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerName="extract-utilities" Feb 04 09:48:30 crc kubenswrapper[4644]: E0204 09:48:30.979171 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerName="extract-content" Feb 04 09:48:30 crc kubenswrapper[4644]: I0204 09:48:30.979178 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerName="extract-content" Feb 04 09:48:30 crc kubenswrapper[4644]: E0204 09:48:30.979198 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8dee946-7d8b-428f-8ea9-890cebc1f6ec" containerName="extract-utilities" Feb 04 09:48:30 crc kubenswrapper[4644]: I0204 09:48:30.979206 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8dee946-7d8b-428f-8ea9-890cebc1f6ec" containerName="extract-utilities" Feb 04 09:48:30 crc kubenswrapper[4644]: E0204 09:48:30.979228 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8dee946-7d8b-428f-8ea9-890cebc1f6ec" containerName="extract-content" Feb 04 09:48:30 crc kubenswrapper[4644]: I0204 09:48:30.979235 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8dee946-7d8b-428f-8ea9-890cebc1f6ec" containerName="extract-content" Feb 04 09:48:30 crc kubenswrapper[4644]: E0204 09:48:30.979242 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8dee946-7d8b-428f-8ea9-890cebc1f6ec" containerName="registry-server" Feb 04 09:48:30 crc kubenswrapper[4644]: I0204 09:48:30.979247 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8dee946-7d8b-428f-8ea9-890cebc1f6ec" containerName="registry-server" Feb 04 09:48:30 crc kubenswrapper[4644]: E0204 09:48:30.979259 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerName="registry-server" Feb 04 09:48:30 crc kubenswrapper[4644]: I0204 09:48:30.979264 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerName="registry-server" Feb 04 09:48:30 crc kubenswrapper[4644]: I0204 09:48:30.979868 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cc036fc-6307-4816-8db7-70d9e9f80bec" containerName="registry-server" Feb 04 09:48:30 crc kubenswrapper[4644]: I0204 09:48:30.979898 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8dee946-7d8b-428f-8ea9-890cebc1f6ec" containerName="registry-server" Feb 04 09:48:30 crc kubenswrapper[4644]: I0204 09:48:30.981237 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:31 crc kubenswrapper[4644]: I0204 09:48:31.005346 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dcq6f"] Feb 04 09:48:31 crc kubenswrapper[4644]: I0204 09:48:31.131434 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/017e9b11-d88d-48a7-8bf6-f8752a8ef084-utilities\") pod \"certified-operators-dcq6f\" (UID: \"017e9b11-d88d-48a7-8bf6-f8752a8ef084\") " pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:31 crc kubenswrapper[4644]: I0204 09:48:31.131620 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9pxb\" (UniqueName: \"kubernetes.io/projected/017e9b11-d88d-48a7-8bf6-f8752a8ef084-kube-api-access-s9pxb\") pod \"certified-operators-dcq6f\" (UID: \"017e9b11-d88d-48a7-8bf6-f8752a8ef084\") " pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:31 crc kubenswrapper[4644]: I0204 09:48:31.131788 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/017e9b11-d88d-48a7-8bf6-f8752a8ef084-catalog-content\") pod \"certified-operators-dcq6f\" (UID: \"017e9b11-d88d-48a7-8bf6-f8752a8ef084\") " pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:31 crc kubenswrapper[4644]: I0204 09:48:31.234316 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/017e9b11-d88d-48a7-8bf6-f8752a8ef084-catalog-content\") pod \"certified-operators-dcq6f\" (UID: \"017e9b11-d88d-48a7-8bf6-f8752a8ef084\") " pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:31 crc kubenswrapper[4644]: I0204 09:48:31.235019 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/017e9b11-d88d-48a7-8bf6-f8752a8ef084-utilities\") pod \"certified-operators-dcq6f\" (UID: \"017e9b11-d88d-48a7-8bf6-f8752a8ef084\") " pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:31 crc kubenswrapper[4644]: I0204 09:48:31.235097 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/017e9b11-d88d-48a7-8bf6-f8752a8ef084-catalog-content\") pod \"certified-operators-dcq6f\" (UID: \"017e9b11-d88d-48a7-8bf6-f8752a8ef084\") " pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:31 crc kubenswrapper[4644]: I0204 09:48:31.235415 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9pxb\" (UniqueName: \"kubernetes.io/projected/017e9b11-d88d-48a7-8bf6-f8752a8ef084-kube-api-access-s9pxb\") pod \"certified-operators-dcq6f\" (UID: \"017e9b11-d88d-48a7-8bf6-f8752a8ef084\") " pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:31 crc kubenswrapper[4644]: I0204 09:48:31.235758 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/017e9b11-d88d-48a7-8bf6-f8752a8ef084-utilities\") pod \"certified-operators-dcq6f\" (UID: \"017e9b11-d88d-48a7-8bf6-f8752a8ef084\") " pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:31 crc kubenswrapper[4644]: I0204 09:48:31.265348 4644 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s9pxb\" (UniqueName: \"kubernetes.io/projected/017e9b11-d88d-48a7-8bf6-f8752a8ef084-kube-api-access-s9pxb\") pod \"certified-operators-dcq6f\" (UID: \"017e9b11-d88d-48a7-8bf6-f8752a8ef084\") " pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:31 crc kubenswrapper[4644]: I0204 09:48:31.299675 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:31 crc kubenswrapper[4644]: I0204 09:48:31.914517 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dcq6f"] Feb 04 09:48:32 crc kubenswrapper[4644]: I0204 09:48:32.466351 4644 generic.go:334] "Generic (PLEG): container finished" podID="017e9b11-d88d-48a7-8bf6-f8752a8ef084" containerID="e55c6b04c6239b2544beac48baf3ba017c946d8e331a37da433cc64ad1b4c4d3" exitCode=0 Feb 04 09:48:32 crc kubenswrapper[4644]: I0204 09:48:32.466404 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcq6f" event={"ID":"017e9b11-d88d-48a7-8bf6-f8752a8ef084","Type":"ContainerDied","Data":"e55c6b04c6239b2544beac48baf3ba017c946d8e331a37da433cc64ad1b4c4d3"} Feb 04 09:48:32 crc kubenswrapper[4644]: I0204 09:48:32.466434 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcq6f" event={"ID":"017e9b11-d88d-48a7-8bf6-f8752a8ef084","Type":"ContainerStarted","Data":"e19ca03578dd83f77b9c87141f90fc545aaff8056706301335dcff004c1b02d3"} Feb 04 09:48:35 crc kubenswrapper[4644]: I0204 09:48:35.513308 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcq6f" event={"ID":"017e9b11-d88d-48a7-8bf6-f8752a8ef084","Type":"ContainerStarted","Data":"4fc0a23dab1285ef9f07c9e27e685e81d565ebd4eee4121278c21969ed372226"} Feb 04 09:48:37 crc kubenswrapper[4644]: I0204 09:48:37.535616 4644 generic.go:334] "Generic (PLEG): container finished" podID="017e9b11-d88d-48a7-8bf6-f8752a8ef084" containerID="4fc0a23dab1285ef9f07c9e27e685e81d565ebd4eee4121278c21969ed372226" exitCode=0 Feb 04 09:48:37 crc kubenswrapper[4644]: I0204 09:48:37.535944 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcq6f" event={"ID":"017e9b11-d88d-48a7-8bf6-f8752a8ef084","Type":"ContainerDied","Data":"4fc0a23dab1285ef9f07c9e27e685e81d565ebd4eee4121278c21969ed372226"} Feb 04 09:48:38 crc kubenswrapper[4644]: I0204 09:48:38.545729 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcq6f" event={"ID":"017e9b11-d88d-48a7-8bf6-f8752a8ef084","Type":"ContainerStarted","Data":"2f6957cc602004dae5a8baa4f7e5a6212d5b8cf9c9a1cdd7821d1a48de0d33c6"} Feb 04 09:48:38 crc kubenswrapper[4644]: I0204 09:48:38.568452 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dcq6f" podStartSLOduration=2.840287205 podStartE2EDuration="8.568435015s" podCreationTimestamp="2026-02-04 09:48:30 +0000 UTC" firstStartedPulling="2026-02-04 09:48:32.468961904 +0000 UTC m=+4022.509019659" lastFinishedPulling="2026-02-04 09:48:38.197109714 +0000 UTC m=+4028.237167469" observedRunningTime="2026-02-04 09:48:38.564372685 +0000 UTC m=+4028.604430440" watchObservedRunningTime="2026-02-04 09:48:38.568435015 +0000 UTC m=+4028.608492770" Feb 04 09:48:41 crc kubenswrapper[4644]: I0204 09:48:41.300637 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:41 crc kubenswrapper[4644]: I0204 09:48:41.301466 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:41 crc kubenswrapper[4644]: I0204 09:48:41.367933 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:51 crc kubenswrapper[4644]: I0204 09:48:51.354135 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:51 crc kubenswrapper[4644]: I0204 09:48:51.413361 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dcq6f"] Feb 04 09:48:51 crc kubenswrapper[4644]: I0204 09:48:51.674943 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dcq6f" podUID="017e9b11-d88d-48a7-8bf6-f8752a8ef084" containerName="registry-server" containerID="cri-o://2f6957cc602004dae5a8baa4f7e5a6212d5b8cf9c9a1cdd7821d1a48de0d33c6" gracePeriod=2 Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.232984 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.290703 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/017e9b11-d88d-48a7-8bf6-f8752a8ef084-utilities\") pod \"017e9b11-d88d-48a7-8bf6-f8752a8ef084\" (UID: \"017e9b11-d88d-48a7-8bf6-f8752a8ef084\") " Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.290850 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/017e9b11-d88d-48a7-8bf6-f8752a8ef084-catalog-content\") pod \"017e9b11-d88d-48a7-8bf6-f8752a8ef084\" (UID: \"017e9b11-d88d-48a7-8bf6-f8752a8ef084\") " Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.290997 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9pxb\" (UniqueName: \"kubernetes.io/projected/017e9b11-d88d-48a7-8bf6-f8752a8ef084-kube-api-access-s9pxb\") pod \"017e9b11-d88d-48a7-8bf6-f8752a8ef084\" (UID: \"017e9b11-d88d-48a7-8bf6-f8752a8ef084\") " Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.291753 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/017e9b11-d88d-48a7-8bf6-f8752a8ef084-utilities" (OuterVolumeSpecName: "utilities") pod "017e9b11-d88d-48a7-8bf6-f8752a8ef084" (UID: "017e9b11-d88d-48a7-8bf6-f8752a8ef084"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.297859 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/017e9b11-d88d-48a7-8bf6-f8752a8ef084-kube-api-access-s9pxb" (OuterVolumeSpecName: "kube-api-access-s9pxb") pod "017e9b11-d88d-48a7-8bf6-f8752a8ef084" (UID: "017e9b11-d88d-48a7-8bf6-f8752a8ef084"). InnerVolumeSpecName "kube-api-access-s9pxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.357428 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/017e9b11-d88d-48a7-8bf6-f8752a8ef084-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "017e9b11-d88d-48a7-8bf6-f8752a8ef084" (UID: "017e9b11-d88d-48a7-8bf6-f8752a8ef084"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.393614 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/017e9b11-d88d-48a7-8bf6-f8752a8ef084-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.393648 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/017e9b11-d88d-48a7-8bf6-f8752a8ef084-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.393661 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9pxb\" (UniqueName: \"kubernetes.io/projected/017e9b11-d88d-48a7-8bf6-f8752a8ef084-kube-api-access-s9pxb\") on node \"crc\" DevicePath \"\"" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.683471 4644 generic.go:334] "Generic (PLEG): container finished" podID="017e9b11-d88d-48a7-8bf6-f8752a8ef084" containerID="2f6957cc602004dae5a8baa4f7e5a6212d5b8cf9c9a1cdd7821d1a48de0d33c6" exitCode=0 Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.683548 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dcq6f" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.683567 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcq6f" event={"ID":"017e9b11-d88d-48a7-8bf6-f8752a8ef084","Type":"ContainerDied","Data":"2f6957cc602004dae5a8baa4f7e5a6212d5b8cf9c9a1cdd7821d1a48de0d33c6"} Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.683864 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcq6f" event={"ID":"017e9b11-d88d-48a7-8bf6-f8752a8ef084","Type":"ContainerDied","Data":"e19ca03578dd83f77b9c87141f90fc545aaff8056706301335dcff004c1b02d3"} Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.683883 4644 scope.go:117] "RemoveContainer" containerID="2f6957cc602004dae5a8baa4f7e5a6212d5b8cf9c9a1cdd7821d1a48de0d33c6" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.710043 4644 scope.go:117] "RemoveContainer" containerID="4fc0a23dab1285ef9f07c9e27e685e81d565ebd4eee4121278c21969ed372226" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.714646 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dcq6f"] Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.734170 4644 scope.go:117] "RemoveContainer" containerID="e55c6b04c6239b2544beac48baf3ba017c946d8e331a37da433cc64ad1b4c4d3" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.750107 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dcq6f"] Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.780097 4644 scope.go:117] "RemoveContainer" containerID="2f6957cc602004dae5a8baa4f7e5a6212d5b8cf9c9a1cdd7821d1a48de0d33c6" Feb 04 09:48:52 crc kubenswrapper[4644]: E0204 09:48:52.781679 4644 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f6957cc602004dae5a8baa4f7e5a6212d5b8cf9c9a1cdd7821d1a48de0d33c6\": container with ID starting with 2f6957cc602004dae5a8baa4f7e5a6212d5b8cf9c9a1cdd7821d1a48de0d33c6 not found: ID does not exist" containerID="2f6957cc602004dae5a8baa4f7e5a6212d5b8cf9c9a1cdd7821d1a48de0d33c6" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.781722 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f6957cc602004dae5a8baa4f7e5a6212d5b8cf9c9a1cdd7821d1a48de0d33c6"} err="failed to get container status \"2f6957cc602004dae5a8baa4f7e5a6212d5b8cf9c9a1cdd7821d1a48de0d33c6\": rpc error: code = NotFound desc = could not find container \"2f6957cc602004dae5a8baa4f7e5a6212d5b8cf9c9a1cdd7821d1a48de0d33c6\": container with ID starting with 2f6957cc602004dae5a8baa4f7e5a6212d5b8cf9c9a1cdd7821d1a48de0d33c6 not found: ID does not exist" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.781756 4644 scope.go:117] "RemoveContainer" containerID="4fc0a23dab1285ef9f07c9e27e685e81d565ebd4eee4121278c21969ed372226" Feb 04 09:48:52 crc kubenswrapper[4644]: E0204 09:48:52.782159 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc0a23dab1285ef9f07c9e27e685e81d565ebd4eee4121278c21969ed372226\": container with ID starting with 4fc0a23dab1285ef9f07c9e27e685e81d565ebd4eee4121278c21969ed372226 not found: ID does not exist" containerID="4fc0a23dab1285ef9f07c9e27e685e81d565ebd4eee4121278c21969ed372226" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.782200 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc0a23dab1285ef9f07c9e27e685e81d565ebd4eee4121278c21969ed372226"} err="failed to get container status \"4fc0a23dab1285ef9f07c9e27e685e81d565ebd4eee4121278c21969ed372226\": rpc error: code = NotFound desc = could not find container \"4fc0a23dab1285ef9f07c9e27e685e81d565ebd4eee4121278c21969ed372226\": container with ID starting with 4fc0a23dab1285ef9f07c9e27e685e81d565ebd4eee4121278c21969ed372226 not found: ID does not exist" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.782227 4644 scope.go:117] "RemoveContainer" containerID="e55c6b04c6239b2544beac48baf3ba017c946d8e331a37da433cc64ad1b4c4d3" Feb 04 09:48:52 crc kubenswrapper[4644]: E0204 09:48:52.782595 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e55c6b04c6239b2544beac48baf3ba017c946d8e331a37da433cc64ad1b4c4d3\": container with ID starting with e55c6b04c6239b2544beac48baf3ba017c946d8e331a37da433cc64ad1b4c4d3 not found: ID does not exist" containerID="e55c6b04c6239b2544beac48baf3ba017c946d8e331a37da433cc64ad1b4c4d3" Feb 04 09:48:52 crc kubenswrapper[4644]: I0204 09:48:52.782623 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e55c6b04c6239b2544beac48baf3ba017c946d8e331a37da433cc64ad1b4c4d3"} err="failed to get container status \"e55c6b04c6239b2544beac48baf3ba017c946d8e331a37da433cc64ad1b4c4d3\": rpc error: code = NotFound desc = could not find container \"e55c6b04c6239b2544beac48baf3ba017c946d8e331a37da433cc64ad1b4c4d3\": container with ID starting with e55c6b04c6239b2544beac48baf3ba017c946d8e331a37da433cc64ad1b4c4d3 not found: ID does not exist" Feb 04 09:48:54 crc kubenswrapper[4644]: I0204 09:48:54.673172 4644 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="017e9b11-d88d-48a7-8bf6-f8752a8ef084" path="/var/lib/kubelet/pods/017e9b11-d88d-48a7-8bf6-f8752a8ef084/volumes" Feb 04 09:49:04 crc kubenswrapper[4644]: I0204 09:49:04.788279 4644 generic.go:334] "Generic (PLEG): container finished" podID="60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a" containerID="101cf3bc772c5640b819a2c2fc997c9d8f98dd037fdd66f52f914b84315540fb" exitCode=0 Feb 04 09:49:04 crc kubenswrapper[4644]: I0204 09:49:04.788398 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a","Type":"ContainerDied","Data":"101cf3bc772c5640b819a2c2fc997c9d8f98dd037fdd66f52f914b84315540fb"} Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.168539 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.285135 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-config-data\") pod \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.285202 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-test-operator-ephemeral-workdir\") pod \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.285307 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-openstack-config\") pod \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.285346 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-ssh-key\") pod \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.285372 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.285671 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-ca-certs\") pod \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.285865 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-test-operator-ephemeral-temporary\") pod \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.285960 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zqdgv\" (UniqueName: \"kubernetes.io/projected/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-kube-api-access-zqdgv\") pod \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.286002 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-openstack-config-secret\") pod \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\" (UID: \"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a\") " Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.287986 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a" (UID: "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.288385 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-config-data" (OuterVolumeSpecName: "config-data") pod "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a" (UID: "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.292886 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-kube-api-access-zqdgv" (OuterVolumeSpecName: "kube-api-access-zqdgv") pod "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a" (UID: "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a"). InnerVolumeSpecName "kube-api-access-zqdgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.299890 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a" (UID: "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.299892 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a" (UID: "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.317146 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a" (UID: "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.317969 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a" (UID: "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.319606 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a" (UID: "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.347440 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a" (UID: "60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.390972 4644 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.391138 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqdgv\" (UniqueName: \"kubernetes.io/projected/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-kube-api-access-zqdgv\") on node \"crc\" DevicePath \"\"" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.391154 4644 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.391163 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.391173 4644 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.391206 4644 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.391217 4644 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.391963 4644 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 04 
09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.392002 4644 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.418559 4644 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.493548 4644 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.810376 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a","Type":"ContainerDied","Data":"2640e4985a32ec4e9e548e86058ca96c0b5a6186a4e2b56a0d39b586e1804313"} Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.810428 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2640e4985a32ec4e9e548e86058ca96c0b5a6186a4e2b56a0d39b586e1804313" Feb 04 09:49:06 crc kubenswrapper[4644]: I0204 09:49:06.810513 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.529673 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 04 09:49:13 crc kubenswrapper[4644]: E0204 09:49:13.530798 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="017e9b11-d88d-48a7-8bf6-f8752a8ef084" containerName="extract-content" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.530815 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="017e9b11-d88d-48a7-8bf6-f8752a8ef084" containerName="extract-content" Feb 04 09:49:13 crc kubenswrapper[4644]: E0204 09:49:13.530841 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a" containerName="tempest-tests-tempest-tests-runner" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.530848 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a" containerName="tempest-tests-tempest-tests-runner" Feb 04 09:49:13 crc kubenswrapper[4644]: E0204 09:49:13.530855 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="017e9b11-d88d-48a7-8bf6-f8752a8ef084" containerName="registry-server" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.530860 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="017e9b11-d88d-48a7-8bf6-f8752a8ef084" containerName="registry-server" Feb 04 09:49:13 crc kubenswrapper[4644]: E0204 09:49:13.530871 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="017e9b11-d88d-48a7-8bf6-f8752a8ef084" containerName="extract-utilities" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.530877 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="017e9b11-d88d-48a7-8bf6-f8752a8ef084" containerName="extract-utilities" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.531076 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a" containerName="tempest-tests-tempest-tests-runner" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.531099 4644 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="017e9b11-d88d-48a7-8bf6-f8752a8ef084" containerName="registry-server" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.531857 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.534314 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-spf2h" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.537645 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.730392 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqkft\" (UniqueName: \"kubernetes.io/projected/2b70d9a5-0c99-4bca-b9e9-8212e140403a-kube-api-access-dqkft\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2b70d9a5-0c99-4bca-b9e9-8212e140403a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.730856 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2b70d9a5-0c99-4bca-b9e9-8212e140403a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.833134 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqkft\" (UniqueName: \"kubernetes.io/projected/2b70d9a5-0c99-4bca-b9e9-8212e140403a-kube-api-access-dqkft\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2b70d9a5-0c99-4bca-b9e9-8212e140403a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.833234 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2b70d9a5-0c99-4bca-b9e9-8212e140403a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.835357 4644 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2b70d9a5-0c99-4bca-b9e9-8212e140403a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.863253 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqkft\" (UniqueName: \"kubernetes.io/projected/2b70d9a5-0c99-4bca-b9e9-8212e140403a-kube-api-access-dqkft\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2b70d9a5-0c99-4bca-b9e9-8212e140403a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.869719 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2b70d9a5-0c99-4bca-b9e9-8212e140403a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 04 09:49:13 crc kubenswrapper[4644]: I0204 09:49:13.892910 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 04 09:49:14 crc kubenswrapper[4644]: I0204 09:49:14.370615 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 04 09:49:14 crc kubenswrapper[4644]: I0204 09:49:14.887655 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"2b70d9a5-0c99-4bca-b9e9-8212e140403a","Type":"ContainerStarted","Data":"cbfe39147650e265cafd0c5173e78a4b34f69389feaa6ca365885d1a629dda87"} Feb 04 09:49:16 crc kubenswrapper[4644]: I0204 09:49:16.918503 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"2b70d9a5-0c99-4bca-b9e9-8212e140403a","Type":"ContainerStarted","Data":"564def79cd5f5a05bfdec7ab0a8b03ad5bb5996393fefef4bbad805f46f44ae2"} Feb 04 09:49:16 crc kubenswrapper[4644]: I0204 09:49:16.940816 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.493219488 podStartE2EDuration="3.94079016s" podCreationTimestamp="2026-02-04 09:49:13 +0000 UTC" firstStartedPulling="2026-02-04 09:49:14.385939144 +0000 UTC m=+4064.425996899" lastFinishedPulling="2026-02-04 09:49:15.833509816 +0000 UTC m=+4065.873567571" observedRunningTime="2026-02-04 09:49:16.933725638 +0000 UTC m=+4066.973783393" watchObservedRunningTime="2026-02-04 09:49:16.94079016 +0000 UTC m=+4066.980847915" Feb 04 09:49:37 crc kubenswrapper[4644]: I0204 09:49:37.177951 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xc6qn/must-gather-9v5v9"] Feb 04 09:49:37 crc kubenswrapper[4644]: I0204 09:49:37.181995 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xc6qn/must-gather-9v5v9" Feb 04 09:49:37 crc kubenswrapper[4644]: I0204 09:49:37.183916 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xc6qn"/"openshift-service-ca.crt" Feb 04 09:49:37 crc kubenswrapper[4644]: I0204 09:49:37.184470 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xc6qn"/"default-dockercfg-4bbnd" Feb 04 09:49:37 crc kubenswrapper[4644]: I0204 09:49:37.185556 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xc6qn"/"kube-root-ca.crt" Feb 04 09:49:37 crc kubenswrapper[4644]: I0204 09:49:37.198988 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xc6qn/must-gather-9v5v9"] Feb 04 09:49:37 crc kubenswrapper[4644]: I0204 09:49:37.320191 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcdg9\" (UniqueName: \"kubernetes.io/projected/812fc2d9-21b8-400c-aa00-f40d4b2bc39a-kube-api-access-mcdg9\") pod \"must-gather-9v5v9\" (UID: \"812fc2d9-21b8-400c-aa00-f40d4b2bc39a\") " pod="openshift-must-gather-xc6qn/must-gather-9v5v9" Feb 04 09:49:37 crc kubenswrapper[4644]: I0204 09:49:37.320350 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/812fc2d9-21b8-400c-aa00-f40d4b2bc39a-must-gather-output\") pod \"must-gather-9v5v9\" (UID: \"812fc2d9-21b8-400c-aa00-f40d4b2bc39a\") " pod="openshift-must-gather-xc6qn/must-gather-9v5v9" Feb 04 09:49:37 crc kubenswrapper[4644]: I0204 09:49:37.422616 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcdg9\" (UniqueName: \"kubernetes.io/projected/812fc2d9-21b8-400c-aa00-f40d4b2bc39a-kube-api-access-mcdg9\") pod \"must-gather-9v5v9\" (UID: \"812fc2d9-21b8-400c-aa00-f40d4b2bc39a\") " pod="openshift-must-gather-xc6qn/must-gather-9v5v9" Feb 04 09:49:37 crc kubenswrapper[4644]: I0204 09:49:37.422752 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/812fc2d9-21b8-400c-aa00-f40d4b2bc39a-must-gather-output\") pod \"must-gather-9v5v9\" (UID: \"812fc2d9-21b8-400c-aa00-f40d4b2bc39a\") " pod="openshift-must-gather-xc6qn/must-gather-9v5v9" Feb 04 09:49:37 crc kubenswrapper[4644]: I0204 09:49:37.423397 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/812fc2d9-21b8-400c-aa00-f40d4b2bc39a-must-gather-output\") pod \"must-gather-9v5v9\" (UID: \"812fc2d9-21b8-400c-aa00-f40d4b2bc39a\") " pod="openshift-must-gather-xc6qn/must-gather-9v5v9" Feb 04 09:49:37 crc kubenswrapper[4644]: I0204 09:49:37.448308 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcdg9\" (UniqueName: \"kubernetes.io/projected/812fc2d9-21b8-400c-aa00-f40d4b2bc39a-kube-api-access-mcdg9\") pod \"must-gather-9v5v9\" (UID: \"812fc2d9-21b8-400c-aa00-f40d4b2bc39a\") " pod="openshift-must-gather-xc6qn/must-gather-9v5v9" Feb 04 09:49:37 crc kubenswrapper[4644]: I0204 09:49:37.503445 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xc6qn/must-gather-9v5v9" Feb 04 09:49:37 crc kubenswrapper[4644]: I0204 09:49:37.977290 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xc6qn/must-gather-9v5v9"] Feb 04 09:49:38 crc kubenswrapper[4644]: I0204 09:49:38.095893 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xc6qn/must-gather-9v5v9" event={"ID":"812fc2d9-21b8-400c-aa00-f40d4b2bc39a","Type":"ContainerStarted","Data":"199e6f16ef94b3a904b5012573dbe78f538d94e030f75292a3289f1d257ee9ab"} Feb 04 09:49:44 crc kubenswrapper[4644]: I0204 09:49:44.167124 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xc6qn/must-gather-9v5v9" event={"ID":"812fc2d9-21b8-400c-aa00-f40d4b2bc39a","Type":"ContainerStarted","Data":"0a74981b3a6b72d64cd30f8741164bbac86a29f80a6dd02ac2ed8796067d384a"} Feb 04 09:49:44 crc kubenswrapper[4644]: I0204 09:49:44.167638 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xc6qn/must-gather-9v5v9" event={"ID":"812fc2d9-21b8-400c-aa00-f40d4b2bc39a","Type":"ContainerStarted","Data":"e3c87ec462c813faad872c6fb4b126b4a8bb72dae01cb986f8d4c10217a015be"} Feb 04 09:49:44 crc kubenswrapper[4644]: I0204 09:49:44.184550 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xc6qn/must-gather-9v5v9" podStartSLOduration=2.051185355 podStartE2EDuration="7.18453349s" podCreationTimestamp="2026-02-04 09:49:37 +0000 UTC" firstStartedPulling="2026-02-04 09:49:37.98319042 +0000 UTC m=+4088.023248175" lastFinishedPulling="2026-02-04 09:49:43.116538555 +0000 UTC m=+4093.156596310" observedRunningTime="2026-02-04 09:49:44.180875071 +0000 UTC m=+4094.220932826" watchObservedRunningTime="2026-02-04 09:49:44.18453349 +0000 UTC m=+4094.224591245" Feb 04 09:49:50 crc kubenswrapper[4644]: I0204 09:49:50.978138 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xc6qn/crc-debug-5f4hl"] Feb 04 09:49:50 crc kubenswrapper[4644]: I0204 09:49:50.979996 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xc6qn/crc-debug-5f4hl" Feb 04 09:49:51 crc kubenswrapper[4644]: I0204 09:49:51.119653 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0adfd1e-0bec-49c4-a289-cf5cc2309730-host\") pod \"crc-debug-5f4hl\" (UID: \"d0adfd1e-0bec-49c4-a289-cf5cc2309730\") " pod="openshift-must-gather-xc6qn/crc-debug-5f4hl" Feb 04 09:49:51 crc kubenswrapper[4644]: I0204 09:49:51.119976 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9zcq\" (UniqueName: \"kubernetes.io/projected/d0adfd1e-0bec-49c4-a289-cf5cc2309730-kube-api-access-z9zcq\") pod \"crc-debug-5f4hl\" (UID: \"d0adfd1e-0bec-49c4-a289-cf5cc2309730\") " pod="openshift-must-gather-xc6qn/crc-debug-5f4hl" Feb 04 09:49:51 crc kubenswrapper[4644]: I0204 09:49:51.222310 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0adfd1e-0bec-49c4-a289-cf5cc2309730-host\") pod \"crc-debug-5f4hl\" (UID: \"d0adfd1e-0bec-49c4-a289-cf5cc2309730\") " pod="openshift-must-gather-xc6qn/crc-debug-5f4hl" Feb 04 09:49:51 crc kubenswrapper[4644]: I0204 09:49:51.222388 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9zcq\" (UniqueName: \"kubernetes.io/projected/d0adfd1e-0bec-49c4-a289-cf5cc2309730-kube-api-access-z9zcq\") pod \"crc-debug-5f4hl\" (UID: \"d0adfd1e-0bec-49c4-a289-cf5cc2309730\") " pod="openshift-must-gather-xc6qn/crc-debug-5f4hl" Feb 04 09:49:51 crc kubenswrapper[4644]: I0204 09:49:51.227502 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0adfd1e-0bec-49c4-a289-cf5cc2309730-host\") pod \"crc-debug-5f4hl\" (UID: \"d0adfd1e-0bec-49c4-a289-cf5cc2309730\") " pod="openshift-must-gather-xc6qn/crc-debug-5f4hl" Feb 04 09:49:51 crc kubenswrapper[4644]: I0204 09:49:51.245312 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9zcq\" (UniqueName: \"kubernetes.io/projected/d0adfd1e-0bec-49c4-a289-cf5cc2309730-kube-api-access-z9zcq\") pod \"crc-debug-5f4hl\" (UID: \"d0adfd1e-0bec-49c4-a289-cf5cc2309730\") " pod="openshift-must-gather-xc6qn/crc-debug-5f4hl" Feb 04 09:49:51 crc kubenswrapper[4644]: I0204 09:49:51.300495 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xc6qn/crc-debug-5f4hl" Feb 04 09:49:52 crc kubenswrapper[4644]: I0204 09:49:52.237621 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xc6qn/crc-debug-5f4hl" event={"ID":"d0adfd1e-0bec-49c4-a289-cf5cc2309730","Type":"ContainerStarted","Data":"ccdaf2e00148a0c3ef36f32a5792703ad4af242402ebc059c60a95a1186bbefe"} Feb 04 09:50:05 crc kubenswrapper[4644]: I0204 09:50:05.358864 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xc6qn/crc-debug-5f4hl" event={"ID":"d0adfd1e-0bec-49c4-a289-cf5cc2309730","Type":"ContainerStarted","Data":"1e830f7396a20858d6ff09a78767461b3e929e9d9ecad29dcf797ea9e476aec5"} Feb 04 09:50:05 crc kubenswrapper[4644]: I0204 09:50:05.415745 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xc6qn/crc-debug-5f4hl" podStartSLOduration=2.183588819 podStartE2EDuration="15.415726782s" podCreationTimestamp="2026-02-04 09:49:50 +0000 UTC" firstStartedPulling="2026-02-04 09:49:51.372205417 +0000 UTC m=+4101.412263172" lastFinishedPulling="2026-02-04 09:50:04.60434338 +0000 UTC m=+4114.644401135" observedRunningTime="2026-02-04 09:50:05.379526318 +0000 UTC m=+4115.419584083" watchObservedRunningTime="2026-02-04 09:50:05.415726782 +0000 UTC m=+4115.455784537" Feb 04 09:50:35 crc kubenswrapper[4644]: I0204 09:50:35.555271 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:50:35 crc kubenswrapper[4644]: I0204 09:50:35.555891 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:51:05 crc kubenswrapper[4644]: I0204 09:51:05.555180 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:51:05 crc kubenswrapper[4644]: I0204 09:51:05.555662 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:51:08 crc kubenswrapper[4644]: I0204 09:51:08.948728 4644 generic.go:334] "Generic (PLEG): container finished" podID="d0adfd1e-0bec-49c4-a289-cf5cc2309730" containerID="1e830f7396a20858d6ff09a78767461b3e929e9d9ecad29dcf797ea9e476aec5" exitCode=0 Feb 04 09:51:08 crc kubenswrapper[4644]: I0204 09:51:08.949187 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xc6qn/crc-debug-5f4hl" event={"ID":"d0adfd1e-0bec-49c4-a289-cf5cc2309730","Type":"ContainerDied","Data":"1e830f7396a20858d6ff09a78767461b3e929e9d9ecad29dcf797ea9e476aec5"} Feb 04 09:51:10 crc kubenswrapper[4644]: I0204 09:51:10.077397 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xc6qn/crc-debug-5f4hl" Feb 04 09:51:10 crc kubenswrapper[4644]: I0204 09:51:10.118950 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xc6qn/crc-debug-5f4hl"] Feb 04 09:51:10 crc kubenswrapper[4644]: I0204 09:51:10.130555 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xc6qn/crc-debug-5f4hl"] Feb 04 09:51:10 crc kubenswrapper[4644]: I0204 09:51:10.239447 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9zcq\" (UniqueName: \"kubernetes.io/projected/d0adfd1e-0bec-49c4-a289-cf5cc2309730-kube-api-access-z9zcq\") pod \"d0adfd1e-0bec-49c4-a289-cf5cc2309730\" (UID: \"d0adfd1e-0bec-49c4-a289-cf5cc2309730\") " Feb 04 09:51:10 crc kubenswrapper[4644]: I0204 09:51:10.239539 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0adfd1e-0bec-49c4-a289-cf5cc2309730-host\") pod \"d0adfd1e-0bec-49c4-a289-cf5cc2309730\" (UID: \"d0adfd1e-0bec-49c4-a289-cf5cc2309730\") " Feb 04 09:51:10 crc kubenswrapper[4644]: I0204 09:51:10.239707 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0adfd1e-0bec-49c4-a289-cf5cc2309730-host" (OuterVolumeSpecName: "host") pod "d0adfd1e-0bec-49c4-a289-cf5cc2309730" (UID: "d0adfd1e-0bec-49c4-a289-cf5cc2309730"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 09:51:10 crc kubenswrapper[4644]: I0204 09:51:10.240175 4644 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0adfd1e-0bec-49c4-a289-cf5cc2309730-host\") on node \"crc\" DevicePath \"\"" Feb 04 09:51:10 crc kubenswrapper[4644]: I0204 09:51:10.245572 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0adfd1e-0bec-49c4-a289-cf5cc2309730-kube-api-access-z9zcq" (OuterVolumeSpecName: "kube-api-access-z9zcq") pod "d0adfd1e-0bec-49c4-a289-cf5cc2309730" (UID: "d0adfd1e-0bec-49c4-a289-cf5cc2309730"). InnerVolumeSpecName "kube-api-access-z9zcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:51:10 crc kubenswrapper[4644]: I0204 09:51:10.342176 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9zcq\" (UniqueName: \"kubernetes.io/projected/d0adfd1e-0bec-49c4-a289-cf5cc2309730-kube-api-access-z9zcq\") on node \"crc\" DevicePath \"\"" Feb 04 09:51:10 crc kubenswrapper[4644]: I0204 09:51:10.671085 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0adfd1e-0bec-49c4-a289-cf5cc2309730" path="/var/lib/kubelet/pods/d0adfd1e-0bec-49c4-a289-cf5cc2309730/volumes" Feb 04 09:51:10 crc kubenswrapper[4644]: I0204 09:51:10.967610 4644 scope.go:117] "RemoveContainer" containerID="1e830f7396a20858d6ff09a78767461b3e929e9d9ecad29dcf797ea9e476aec5" Feb 04 09:51:10 crc kubenswrapper[4644]: I0204 09:51:10.967661 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xc6qn/crc-debug-5f4hl" Feb 04 09:51:11 crc kubenswrapper[4644]: I0204 09:51:11.305843 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xc6qn/crc-debug-49nnk"] Feb 04 09:51:11 crc kubenswrapper[4644]: E0204 09:51:11.307738 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0adfd1e-0bec-49c4-a289-cf5cc2309730" containerName="container-00" Feb 04 09:51:11 crc kubenswrapper[4644]: I0204 09:51:11.307836 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0adfd1e-0bec-49c4-a289-cf5cc2309730" containerName="container-00" Feb 04 09:51:11 crc kubenswrapper[4644]: I0204 09:51:11.308101 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0adfd1e-0bec-49c4-a289-cf5cc2309730" containerName="container-00" Feb 04 09:51:11 crc kubenswrapper[4644]: I0204 09:51:11.308873 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xc6qn/crc-debug-49nnk" Feb 04 09:51:11 crc kubenswrapper[4644]: I0204 09:51:11.469078 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/475ab98f-6c22-4d22-b054-a5bdb943cb70-host\") pod \"crc-debug-49nnk\" (UID: \"475ab98f-6c22-4d22-b054-a5bdb943cb70\") " pod="openshift-must-gather-xc6qn/crc-debug-49nnk" Feb 04 09:51:11 crc kubenswrapper[4644]: I0204 09:51:11.469306 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5ltm\" (UniqueName: \"kubernetes.io/projected/475ab98f-6c22-4d22-b054-a5bdb943cb70-kube-api-access-m5ltm\") pod \"crc-debug-49nnk\" (UID: \"475ab98f-6c22-4d22-b054-a5bdb943cb70\") " pod="openshift-must-gather-xc6qn/crc-debug-49nnk" Feb 04 09:51:11 crc kubenswrapper[4644]: I0204 09:51:11.571692 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/475ab98f-6c22-4d22-b054-a5bdb943cb70-host\") pod \"crc-debug-49nnk\" (UID: \"475ab98f-6c22-4d22-b054-a5bdb943cb70\") " pod="openshift-must-gather-xc6qn/crc-debug-49nnk" Feb 04 09:51:11 crc kubenswrapper[4644]: I0204 09:51:11.571878 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/475ab98f-6c22-4d22-b054-a5bdb943cb70-host\") pod \"crc-debug-49nnk\" (UID: \"475ab98f-6c22-4d22-b054-a5bdb943cb70\") " pod="openshift-must-gather-xc6qn/crc-debug-49nnk" Feb 04 09:51:11 crc kubenswrapper[4644]: I0204 09:51:11.571917 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5ltm\" (UniqueName: \"kubernetes.io/projected/475ab98f-6c22-4d22-b054-a5bdb943cb70-kube-api-access-m5ltm\") pod \"crc-debug-49nnk\" (UID: \"475ab98f-6c22-4d22-b054-a5bdb943cb70\") " pod="openshift-must-gather-xc6qn/crc-debug-49nnk" Feb 04 09:51:11 crc kubenswrapper[4644]: I0204 09:51:11.604869 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5ltm\" (UniqueName: \"kubernetes.io/projected/475ab98f-6c22-4d22-b054-a5bdb943cb70-kube-api-access-m5ltm\") pod \"crc-debug-49nnk\" (UID: \"475ab98f-6c22-4d22-b054-a5bdb943cb70\") " pod="openshift-must-gather-xc6qn/crc-debug-49nnk" Feb 04 09:51:11 crc kubenswrapper[4644]: I0204 09:51:11.630733 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xc6qn/crc-debug-49nnk" Feb 04 09:51:11 crc kubenswrapper[4644]: I0204 09:51:11.981659 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xc6qn/crc-debug-49nnk" event={"ID":"475ab98f-6c22-4d22-b054-a5bdb943cb70","Type":"ContainerStarted","Data":"37da431a9732cb41a1109171e804b2f3d3904f056f89b53a85c514a8c9be31ea"} Feb 04 09:51:11 crc kubenswrapper[4644]: I0204 09:51:11.981982 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xc6qn/crc-debug-49nnk" event={"ID":"475ab98f-6c22-4d22-b054-a5bdb943cb70","Type":"ContainerStarted","Data":"124a05a1c12818e5481cca1332620ed748867fc3e66049af5a7387d6eacceaa5"} Feb 04 09:51:12 crc kubenswrapper[4644]: I0204 09:51:12.003501 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xc6qn/crc-debug-49nnk" podStartSLOduration=1.003482715 podStartE2EDuration="1.003482715s" podCreationTimestamp="2026-02-04 09:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 09:51:11.996601057 +0000 UTC m=+4182.036658822" watchObservedRunningTime="2026-02-04 09:51:12.003482715 +0000 UTC m=+4182.043540470" Feb 04 09:51:12 crc kubenswrapper[4644]: I0204 09:51:12.992843 4644 generic.go:334] "Generic (PLEG): container finished" podID="475ab98f-6c22-4d22-b054-a5bdb943cb70" containerID="37da431a9732cb41a1109171e804b2f3d3904f056f89b53a85c514a8c9be31ea" exitCode=0 Feb 04 09:51:12 crc kubenswrapper[4644]: I0204 09:51:12.993156 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xc6qn/crc-debug-49nnk" event={"ID":"475ab98f-6c22-4d22-b054-a5bdb943cb70","Type":"ContainerDied","Data":"37da431a9732cb41a1109171e804b2f3d3904f056f89b53a85c514a8c9be31ea"} Feb 04 09:51:14 crc kubenswrapper[4644]: I0204 09:51:14.110003 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xc6qn/crc-debug-49nnk" Feb 04 09:51:14 crc kubenswrapper[4644]: I0204 09:51:14.214633 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/475ab98f-6c22-4d22-b054-a5bdb943cb70-host\") pod \"475ab98f-6c22-4d22-b054-a5bdb943cb70\" (UID: \"475ab98f-6c22-4d22-b054-a5bdb943cb70\") " Feb 04 09:51:14 crc kubenswrapper[4644]: I0204 09:51:14.214753 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5ltm\" (UniqueName: \"kubernetes.io/projected/475ab98f-6c22-4d22-b054-a5bdb943cb70-kube-api-access-m5ltm\") pod \"475ab98f-6c22-4d22-b054-a5bdb943cb70\" (UID: \"475ab98f-6c22-4d22-b054-a5bdb943cb70\") " Feb 04 09:51:14 crc kubenswrapper[4644]: I0204 09:51:14.214918 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/475ab98f-6c22-4d22-b054-a5bdb943cb70-host" (OuterVolumeSpecName: "host") pod "475ab98f-6c22-4d22-b054-a5bdb943cb70" (UID: "475ab98f-6c22-4d22-b054-a5bdb943cb70"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 09:51:14 crc kubenswrapper[4644]: I0204 09:51:14.215441 4644 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/475ab98f-6c22-4d22-b054-a5bdb943cb70-host\") on node \"crc\" DevicePath \"\"" Feb 04 09:51:14 crc kubenswrapper[4644]: I0204 09:51:14.237126 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/475ab98f-6c22-4d22-b054-a5bdb943cb70-kube-api-access-m5ltm" (OuterVolumeSpecName: "kube-api-access-m5ltm") pod "475ab98f-6c22-4d22-b054-a5bdb943cb70" (UID: "475ab98f-6c22-4d22-b054-a5bdb943cb70"). InnerVolumeSpecName "kube-api-access-m5ltm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:51:14 crc kubenswrapper[4644]: I0204 09:51:14.316680 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5ltm\" (UniqueName: \"kubernetes.io/projected/475ab98f-6c22-4d22-b054-a5bdb943cb70-kube-api-access-m5ltm\") on node \"crc\" DevicePath \"\"" Feb 04 09:51:14 crc kubenswrapper[4644]: I0204 09:51:14.643501 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xc6qn/crc-debug-49nnk"] Feb 04 09:51:14 crc kubenswrapper[4644]: I0204 09:51:14.652104 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xc6qn/crc-debug-49nnk"] Feb 04 09:51:14 crc kubenswrapper[4644]: I0204 09:51:14.669564 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="475ab98f-6c22-4d22-b054-a5bdb943cb70" path="/var/lib/kubelet/pods/475ab98f-6c22-4d22-b054-a5bdb943cb70/volumes" Feb 04 09:51:15 crc kubenswrapper[4644]: I0204 09:51:15.018155 4644 scope.go:117] "RemoveContainer" containerID="37da431a9732cb41a1109171e804b2f3d3904f056f89b53a85c514a8c9be31ea" Feb 04 09:51:15 crc kubenswrapper[4644]: I0204 09:51:15.018244 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xc6qn/crc-debug-49nnk" Feb 04 09:51:15 crc kubenswrapper[4644]: I0204 09:51:15.817564 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xc6qn/crc-debug-9twcx"] Feb 04 09:51:15 crc kubenswrapper[4644]: E0204 09:51:15.818350 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475ab98f-6c22-4d22-b054-a5bdb943cb70" containerName="container-00" Feb 04 09:51:15 crc kubenswrapper[4644]: I0204 09:51:15.818366 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="475ab98f-6c22-4d22-b054-a5bdb943cb70" containerName="container-00" Feb 04 09:51:15 crc kubenswrapper[4644]: I0204 09:51:15.818575 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="475ab98f-6c22-4d22-b054-a5bdb943cb70" containerName="container-00" Feb 04 09:51:15 crc kubenswrapper[4644]: I0204 09:51:15.819758 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xc6qn/crc-debug-9twcx" Feb 04 09:51:15 crc kubenswrapper[4644]: I0204 09:51:15.945774 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/609940be-f8ea-451d-af79-d4c04f154d5b-host\") pod \"crc-debug-9twcx\" (UID: \"609940be-f8ea-451d-af79-d4c04f154d5b\") " pod="openshift-must-gather-xc6qn/crc-debug-9twcx" Feb 04 09:51:15 crc kubenswrapper[4644]: I0204 09:51:15.945908 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkk7m\" (UniqueName: \"kubernetes.io/projected/609940be-f8ea-451d-af79-d4c04f154d5b-kube-api-access-vkk7m\") pod \"crc-debug-9twcx\" (UID: \"609940be-f8ea-451d-af79-d4c04f154d5b\") " pod="openshift-must-gather-xc6qn/crc-debug-9twcx" Feb 04 09:51:16 crc kubenswrapper[4644]: I0204 09:51:16.047625 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkk7m\" (UniqueName: \"kubernetes.io/projected/609940be-f8ea-451d-af79-d4c04f154d5b-kube-api-access-vkk7m\") pod \"crc-debug-9twcx\" (UID: \"609940be-f8ea-451d-af79-d4c04f154d5b\") " pod="openshift-must-gather-xc6qn/crc-debug-9twcx" Feb 04 09:51:16 crc kubenswrapper[4644]: I0204 09:51:16.047811 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/609940be-f8ea-451d-af79-d4c04f154d5b-host\") pod \"crc-debug-9twcx\" (UID: \"609940be-f8ea-451d-af79-d4c04f154d5b\") " pod="openshift-must-gather-xc6qn/crc-debug-9twcx" Feb 04 09:51:16 crc kubenswrapper[4644]: I0204 09:51:16.047986 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/609940be-f8ea-451d-af79-d4c04f154d5b-host\") pod \"crc-debug-9twcx\" (UID: \"609940be-f8ea-451d-af79-d4c04f154d5b\") " pod="openshift-must-gather-xc6qn/crc-debug-9twcx" Feb 04 09:51:16 crc kubenswrapper[4644]: I0204 09:51:16.069198 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkk7m\" (UniqueName: \"kubernetes.io/projected/609940be-f8ea-451d-af79-d4c04f154d5b-kube-api-access-vkk7m\") pod \"crc-debug-9twcx\" (UID: \"609940be-f8ea-451d-af79-d4c04f154d5b\") " pod="openshift-must-gather-xc6qn/crc-debug-9twcx" Feb 04 09:51:16 crc kubenswrapper[4644]: I0204 09:51:16.137131 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xc6qn/crc-debug-9twcx" Feb 04 09:51:17 crc kubenswrapper[4644]: I0204 09:51:17.039516 4644 generic.go:334] "Generic (PLEG): container finished" podID="609940be-f8ea-451d-af79-d4c04f154d5b" containerID="9106c0782fbc44c5a8d0079268b88664adcbeb7796b6e67b0e30891442b2df1b" exitCode=0 Feb 04 09:51:17 crc kubenswrapper[4644]: I0204 09:51:17.039620 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xc6qn/crc-debug-9twcx" event={"ID":"609940be-f8ea-451d-af79-d4c04f154d5b","Type":"ContainerDied","Data":"9106c0782fbc44c5a8d0079268b88664adcbeb7796b6e67b0e30891442b2df1b"} Feb 04 09:51:17 crc kubenswrapper[4644]: I0204 09:51:17.039877 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xc6qn/crc-debug-9twcx" event={"ID":"609940be-f8ea-451d-af79-d4c04f154d5b","Type":"ContainerStarted","Data":"5c3edb5d269cfd45b6b26cb51c63cd0ff4c0a274387eaf8dd668ed3ab55db274"} Feb 04 09:51:17 crc kubenswrapper[4644]: I0204 09:51:17.082228 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xc6qn/crc-debug-9twcx"] Feb 04 09:51:17 crc kubenswrapper[4644]: I0204 09:51:17.093121 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xc6qn/crc-debug-9twcx"] Feb 04 09:51:18 crc kubenswrapper[4644]: I0204 09:51:18.151482 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xc6qn/crc-debug-9twcx" Feb 04 09:51:18 crc kubenswrapper[4644]: I0204 09:51:18.287168 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/609940be-f8ea-451d-af79-d4c04f154d5b-host\") pod \"609940be-f8ea-451d-af79-d4c04f154d5b\" (UID: \"609940be-f8ea-451d-af79-d4c04f154d5b\") " Feb 04 09:51:18 crc kubenswrapper[4644]: I0204 09:51:18.287232 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkk7m\" (UniqueName: \"kubernetes.io/projected/609940be-f8ea-451d-af79-d4c04f154d5b-kube-api-access-vkk7m\") pod \"609940be-f8ea-451d-af79-d4c04f154d5b\" (UID: \"609940be-f8ea-451d-af79-d4c04f154d5b\") " Feb 04 09:51:18 crc kubenswrapper[4644]: I0204 09:51:18.287275 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/609940be-f8ea-451d-af79-d4c04f154d5b-host" (OuterVolumeSpecName: "host") pod "609940be-f8ea-451d-af79-d4c04f154d5b" (UID: "609940be-f8ea-451d-af79-d4c04f154d5b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 09:51:18 crc kubenswrapper[4644]: I0204 09:51:18.288433 4644 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/609940be-f8ea-451d-af79-d4c04f154d5b-host\") on node \"crc\" DevicePath \"\"" Feb 04 09:51:18 crc kubenswrapper[4644]: I0204 09:51:18.296204 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/609940be-f8ea-451d-af79-d4c04f154d5b-kube-api-access-vkk7m" (OuterVolumeSpecName: "kube-api-access-vkk7m") pod "609940be-f8ea-451d-af79-d4c04f154d5b" (UID: "609940be-f8ea-451d-af79-d4c04f154d5b"). InnerVolumeSpecName "kube-api-access-vkk7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:51:18 crc kubenswrapper[4644]: I0204 09:51:18.400657 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkk7m\" (UniqueName: \"kubernetes.io/projected/609940be-f8ea-451d-af79-d4c04f154d5b-kube-api-access-vkk7m\") on node \"crc\" DevicePath \"\"" Feb 04 09:51:18 crc kubenswrapper[4644]: I0204 09:51:18.670600 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="609940be-f8ea-451d-af79-d4c04f154d5b" path="/var/lib/kubelet/pods/609940be-f8ea-451d-af79-d4c04f154d5b/volumes" Feb 04 09:51:19 crc kubenswrapper[4644]: I0204 09:51:19.058816 4644 scope.go:117] "RemoveContainer" containerID="9106c0782fbc44c5a8d0079268b88664adcbeb7796b6e67b0e30891442b2df1b" Feb 04 09:51:19 crc kubenswrapper[4644]: I0204 09:51:19.058870 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xc6qn/crc-debug-9twcx" Feb 04 09:51:35 crc kubenswrapper[4644]: I0204 09:51:35.554781 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:51:35 crc kubenswrapper[4644]: I0204 09:51:35.557208 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:51:35 crc kubenswrapper[4644]: I0204 09:51:35.557353 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 09:51:35 crc kubenswrapper[4644]: I0204 09:51:35.558290 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5129befbb7587fea7f550af34a5bec8229582d69cfb16b12c52b0244e3fafa0"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 09:51:35 crc kubenswrapper[4644]: I0204 09:51:35.558536 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://f5129befbb7587fea7f550af34a5bec8229582d69cfb16b12c52b0244e3fafa0" gracePeriod=600 Feb 04 09:51:36 crc kubenswrapper[4644]: I0204 09:51:36.230405 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="f5129befbb7587fea7f550af34a5bec8229582d69cfb16b12c52b0244e3fafa0" exitCode=0 Feb 04 09:51:36 crc kubenswrapper[4644]: I0204 09:51:36.230480 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"f5129befbb7587fea7f550af34a5bec8229582d69cfb16b12c52b0244e3fafa0"} Feb 04 09:51:36 crc kubenswrapper[4644]: I0204 09:51:36.230778 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" 
event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39"} Feb 04 09:51:36 crc kubenswrapper[4644]: I0204 09:51:36.230817 4644 scope.go:117] "RemoveContainer" containerID="c7a79a53a152fb5c6d9d257162983741a9a2c556af4eeb6af48e7a1a6dfd8b66" Feb 04 09:51:40 crc kubenswrapper[4644]: I0204 09:51:40.233203 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-558bf4756b-n2g7b_9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7/barbican-api/0.log" Feb 04 09:51:40 crc kubenswrapper[4644]: I0204 09:51:40.430351 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-558bf4756b-n2g7b_9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7/barbican-api-log/0.log" Feb 04 09:51:40 crc kubenswrapper[4644]: I0204 09:51:40.460204 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-866db59d-m5kdr_f0a0f2d9-bd63-4dc5-826c-5d67f92a31da/barbican-keystone-listener/0.log" Feb 04 09:51:40 crc kubenswrapper[4644]: I0204 09:51:40.538339 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-866db59d-m5kdr_f0a0f2d9-bd63-4dc5-826c-5d67f92a31da/barbican-keystone-listener-log/0.log" Feb 04 09:51:40 crc kubenswrapper[4644]: I0204 09:51:40.691099 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-747c75f8c-ljgzl_be2eab6d-9a04-400b-baa9-c20fe5fcd269/barbican-worker/0.log" Feb 04 09:51:40 crc kubenswrapper[4644]: I0204 09:51:40.736669 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-747c75f8c-ljgzl_be2eab6d-9a04-400b-baa9-c20fe5fcd269/barbican-worker-log/0.log" Feb 04 09:51:40 crc kubenswrapper[4644]: I0204 09:51:40.961231 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4_308d165a-5458-4e82-936c-b7a25ebfcbe6/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:51:41 crc kubenswrapper[4644]: I0204 09:51:41.031694 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6e9334f7-37d6-49f6-9c7f-e5b301283f15/ceilometer-central-agent/0.log" Feb 04 09:51:41 crc kubenswrapper[4644]: I0204 09:51:41.084469 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6e9334f7-37d6-49f6-9c7f-e5b301283f15/ceilometer-notification-agent/0.log" Feb 04 09:51:41 crc kubenswrapper[4644]: I0204 09:51:41.206058 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6e9334f7-37d6-49f6-9c7f-e5b301283f15/sg-core/0.log" Feb 04 09:51:41 crc kubenswrapper[4644]: I0204 09:51:41.262910 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6e9334f7-37d6-49f6-9c7f-e5b301283f15/proxy-httpd/0.log" Feb 04 09:51:41 crc kubenswrapper[4644]: I0204 09:51:41.450244 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4109caeb-65a7-4c6b-b09c-83da593a1ef2/cinder-api/0.log" Feb 04 09:51:41 crc kubenswrapper[4644]: I0204 09:51:41.521222 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4109caeb-65a7-4c6b-b09c-83da593a1ef2/cinder-api-log/0.log" Feb 04 09:51:41 crc kubenswrapper[4644]: I0204 09:51:41.602999 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e01886c2-fe24-4f65-9ace-d48998f27c65/cinder-scheduler/0.log" Feb 04 09:51:41 crc kubenswrapper[4644]: 
I0204 09:51:41.696874 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e01886c2-fe24-4f65-9ace-d48998f27c65/probe/0.log" Feb 04 09:51:41 crc kubenswrapper[4644]: I0204 09:51:41.870561 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw_54674fd4-5080-4cea-8cf9-7c6bbd9c53de/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:51:41 crc kubenswrapper[4644]: I0204 09:51:41.970039 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z_10140326-561b-48b8-8746-576a83f36c12/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:51:42 crc kubenswrapper[4644]: I0204 09:51:42.299570 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-fk6fk_a581143f-dc8c-4226-a36c-5ece09be2e6f/init/0.log" Feb 04 09:51:42 crc kubenswrapper[4644]: I0204 09:51:42.453095 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-fk6fk_a581143f-dc8c-4226-a36c-5ece09be2e6f/init/0.log" Feb 04 09:51:42 crc kubenswrapper[4644]: I0204 09:51:42.642561 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w_01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:51:42 crc kubenswrapper[4644]: I0204 09:51:42.673665 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-fk6fk_a581143f-dc8c-4226-a36c-5ece09be2e6f/dnsmasq-dns/0.log" Feb 04 09:51:42 crc kubenswrapper[4644]: I0204 09:51:42.862580 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c737ef12-0ce6-47d8-9773-0244eff8200b/glance-log/0.log" Feb 04 09:51:42 crc kubenswrapper[4644]: I0204 09:51:42.922805 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c737ef12-0ce6-47d8-9773-0244eff8200b/glance-httpd/0.log" Feb 04 09:51:43 crc kubenswrapper[4644]: I0204 09:51:43.101810 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1fda6114-8d44-49ba-b30e-8ce9233f4b33/glance-httpd/0.log" Feb 04 09:51:43 crc kubenswrapper[4644]: I0204 09:51:43.128423 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1fda6114-8d44-49ba-b30e-8ce9233f4b33/glance-log/0.log" Feb 04 09:51:43 crc kubenswrapper[4644]: I0204 09:51:43.313269 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-658bfcb544-88gj4_676db25f-e0ad-48cc-af2c-88029d6eb80d/horizon/1.log" Feb 04 09:51:43 crc kubenswrapper[4644]: I0204 09:51:43.505307 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-658bfcb544-88gj4_676db25f-e0ad-48cc-af2c-88029d6eb80d/horizon/0.log" Feb 04 09:51:43 crc kubenswrapper[4644]: I0204 09:51:43.852129 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-658bfcb544-88gj4_676db25f-e0ad-48cc-af2c-88029d6eb80d/horizon-log/0.log" Feb 04 09:51:44 crc kubenswrapper[4644]: I0204 09:51:44.102494 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-s75nr_11062f5d-3dd1-4087-9ea2-1b32fee5526c/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:51:44 crc kubenswrapper[4644]: 
I0204 09:51:44.208812 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-bzzbj_110ef1d0-ffbc-4356-9c1f-169889312eef/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:51:44 crc kubenswrapper[4644]: I0204 09:51:44.487704 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29503261-bsnnv_cc72f9f7-839f-402b-9576-e9daf7ed4d5b/keystone-cron/0.log" Feb 04 09:51:44 crc kubenswrapper[4644]: I0204 09:51:44.835767 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-766cbd9f4b-bj8dc_1fa2a049-f943-48c9-b4c2-09c2cd5decc2/keystone-api/0.log" Feb 04 09:51:44 crc kubenswrapper[4644]: I0204 09:51:44.929853 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_26059c78-ccf4-418d-9012-40eb6cc5ba6f/kube-state-metrics/0.log" Feb 04 09:51:45 crc kubenswrapper[4644]: I0204 09:51:45.085624 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-b44ph_5a074a3e-62ea-4cb2-96f3-ccce51518ad3/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:51:45 crc kubenswrapper[4644]: I0204 09:51:45.784938 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5cdfd666b9-jkzcm_e05bc597-36c9-492b-abb4-45edb814eed5/neutron-httpd/0.log" Feb 04 09:51:45 crc kubenswrapper[4644]: I0204 09:51:45.785361 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv_da0998d9-9cc2-4e46-ac4f-f47ec801a998/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:51:46 crc kubenswrapper[4644]: I0204 09:51:46.159396 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5cdfd666b9-jkzcm_e05bc597-36c9-492b-abb4-45edb814eed5/neutron-api/0.log" Feb 04 09:51:46 crc kubenswrapper[4644]: I0204 09:51:46.665261 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_75651f7d-0816-4090-bcd8-0c20fd5660bd/nova-cell0-conductor-conductor/0.log" Feb 04 09:51:46 crc kubenswrapper[4644]: I0204 09:51:46.837360 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_36639dbd-0602-44cf-a535-51d69170e6c5/nova-cell1-conductor-conductor/0.log" Feb 04 09:51:47 crc kubenswrapper[4644]: I0204 09:51:47.271959 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_3ed25922-57d7-4a67-828a-6a07c733ba91/nova-cell1-novncproxy-novncproxy/0.log" Feb 04 09:51:47 crc kubenswrapper[4644]: I0204 09:51:47.410405 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_96b76067-7c3f-44cb-8d2a-0bbb04035d9c/nova-api-log/0.log" Feb 04 09:51:47 crc kubenswrapper[4644]: I0204 09:51:47.686072 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_96b76067-7c3f-44cb-8d2a-0bbb04035d9c/nova-api-api/0.log" Feb 04 09:51:47 crc kubenswrapper[4644]: I0204 09:51:47.739906 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-wbclq_7446c79e-b931-43ae-85a0-f21ab513e5e7/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:51:47 crc kubenswrapper[4644]: I0204 09:51:47.788470 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_bc0f95ed-7197-4f32-8d5c-7d9551d0f846/nova-metadata-log/0.log" Feb 04 09:51:48 crc kubenswrapper[4644]: 
I0204 09:51:48.263283 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4536ebcc-8962-4cf4-9cae-5db170118156/mysql-bootstrap/0.log" Feb 04 09:51:48 crc kubenswrapper[4644]: I0204 09:51:48.487727 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4536ebcc-8962-4cf4-9cae-5db170118156/galera/0.log" Feb 04 09:51:48 crc kubenswrapper[4644]: I0204 09:51:48.548597 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4536ebcc-8962-4cf4-9cae-5db170118156/mysql-bootstrap/0.log" Feb 04 09:51:48 crc kubenswrapper[4644]: I0204 09:51:48.574954 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_9decc8da-612f-4d8e-9ec7-b3894e3456f5/nova-scheduler-scheduler/0.log" Feb 04 09:51:48 crc kubenswrapper[4644]: I0204 09:51:48.816023 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9bf50d46-1c85-4db8-9887-f30f832212c1/mysql-bootstrap/0.log" Feb 04 09:51:49 crc kubenswrapper[4644]: I0204 09:51:49.041267 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9bf50d46-1c85-4db8-9887-f30f832212c1/mysql-bootstrap/0.log" Feb 04 09:51:49 crc kubenswrapper[4644]: I0204 09:51:49.073861 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9bf50d46-1c85-4db8-9887-f30f832212c1/galera/0.log" Feb 04 09:51:49 crc kubenswrapper[4644]: I0204 09:51:49.323845 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d8596785-f659-4038-ac9a-a48c9a4dbd44/openstackclient/0.log" Feb 04 09:51:49 crc kubenswrapper[4644]: I0204 09:51:49.494384 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_bc0f95ed-7197-4f32-8d5c-7d9551d0f846/nova-metadata-metadata/0.log" Feb 04 09:51:49 crc kubenswrapper[4644]: I0204 09:51:49.503300 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-8nfv7_964cdd6e-b29a-401d-9bb0-3375b663a899/ovn-controller/0.log" Feb 04 09:51:49 crc kubenswrapper[4644]: I0204 09:51:49.718561 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mb84g_b4cecbc7-4505-46d1-8ddb-4b454e614fb1/openstack-network-exporter/0.log" Feb 04 09:51:49 crc kubenswrapper[4644]: I0204 09:51:49.922315 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zzhbv_a5cee1f7-2917-47fe-95ac-96b0d9c502b7/ovsdb-server-init/0.log" Feb 04 09:51:50 crc kubenswrapper[4644]: I0204 09:51:50.094109 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zzhbv_a5cee1f7-2917-47fe-95ac-96b0d9c502b7/ovs-vswitchd/0.log" Feb 04 09:51:50 crc kubenswrapper[4644]: I0204 09:51:50.101518 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zzhbv_a5cee1f7-2917-47fe-95ac-96b0d9c502b7/ovsdb-server-init/0.log" Feb 04 09:51:50 crc kubenswrapper[4644]: I0204 09:51:50.200017 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zzhbv_a5cee1f7-2917-47fe-95ac-96b0d9c502b7/ovsdb-server/0.log" Feb 04 09:51:50 crc kubenswrapper[4644]: I0204 09:51:50.437691 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-4lkt5_409ea25f-f243-4e2e-811a-2e887aad6ab8/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:51:50 crc 
kubenswrapper[4644]: I0204 09:51:50.456863 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_92ea26d9-2316-4fe5-b998-ed9fa22e6a2a/openstack-network-exporter/0.log" Feb 04 09:51:50 crc kubenswrapper[4644]: I0204 09:51:50.542806 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_92ea26d9-2316-4fe5-b998-ed9fa22e6a2a/ovn-northd/0.log" Feb 04 09:51:50 crc kubenswrapper[4644]: I0204 09:51:50.753464 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f/openstack-network-exporter/0.log" Feb 04 09:51:50 crc kubenswrapper[4644]: I0204 09:51:50.806754 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f/ovsdbserver-nb/0.log" Feb 04 09:51:51 crc kubenswrapper[4644]: I0204 09:51:51.049318 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_73360e1e-70eb-499b-b3a1-cd9bde6ac466/ovsdbserver-sb/0.log" Feb 04 09:51:51 crc kubenswrapper[4644]: I0204 09:51:51.069094 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_73360e1e-70eb-499b-b3a1-cd9bde6ac466/openstack-network-exporter/0.log" Feb 04 09:51:51 crc kubenswrapper[4644]: I0204 09:51:51.421084 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6fc7776988-rx9dz_fe4a7be8-11a8-4974-80dc-0893a6f9c104/placement-api/0.log" Feb 04 09:51:51 crc kubenswrapper[4644]: I0204 09:51:51.913365 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ccc5a46e-238d-43d7-9d48-311b21c76326/setup-container/0.log" Feb 04 09:51:52 crc kubenswrapper[4644]: I0204 09:51:52.110786 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6fc7776988-rx9dz_fe4a7be8-11a8-4974-80dc-0893a6f9c104/placement-log/0.log" Feb 04 09:51:52 crc kubenswrapper[4644]: I0204 09:51:52.331772 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ccc5a46e-238d-43d7-9d48-311b21c76326/setup-container/0.log" Feb 04 09:51:52 crc kubenswrapper[4644]: I0204 09:51:52.350603 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ccc5a46e-238d-43d7-9d48-311b21c76326/rabbitmq/0.log" Feb 04 09:51:52 crc kubenswrapper[4644]: I0204 09:51:52.462964 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ca7a0ec9-ff74-4989-b66e-29bfc47bc73d/setup-container/0.log" Feb 04 09:51:53 crc kubenswrapper[4644]: I0204 09:51:53.440471 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ca7a0ec9-ff74-4989-b66e-29bfc47bc73d/setup-container/0.log" Feb 04 09:51:53 crc kubenswrapper[4644]: I0204 09:51:53.449549 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6_c4d8e999-5063-4f94-a049-6566ecee94fb/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:51:53 crc kubenswrapper[4644]: I0204 09:51:53.477546 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ca7a0ec9-ff74-4989-b66e-29bfc47bc73d/rabbitmq/0.log" Feb 04 09:51:53 crc kubenswrapper[4644]: I0204 09:51:53.750190 4644 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-rjb5s_a2f175cb-68ae-4aa4-ad16-193a42aa579d/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:51:53 crc kubenswrapper[4644]: I0204 09:51:53.836303 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw_156d5fb6-7e66-4c46-b846-26d3344b8f05/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:51:54 crc kubenswrapper[4644]: I0204 09:51:54.110671 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5gqkp_bf71221b-6b1b-4245-b080-346ef3c46902/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:51:54 crc kubenswrapper[4644]: I0204 09:51:54.113537 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9vf66_201d72e8-4479-464a-949d-53be692f0f9e/ssh-known-hosts-edpm-deployment/0.log" Feb 04 09:51:54 crc kubenswrapper[4644]: I0204 09:51:54.842171 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-69d495f767-hzkrb_6a04a95b-5411-483c-a0de-408fa44500e0/proxy-server/0.log" Feb 04 09:51:54 crc kubenswrapper[4644]: I0204 09:51:54.871683 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-69d495f767-hzkrb_6a04a95b-5411-483c-a0de-408fa44500e0/proxy-httpd/0.log" Feb 04 09:51:54 crc kubenswrapper[4644]: I0204 09:51:54.941441 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-sltjx_5a843b53-7ea4-48d9-9c8a-16be734d66c6/swift-ring-rebalance/0.log" Feb 04 09:51:55 crc kubenswrapper[4644]: I0204 09:51:55.112515 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/account-auditor/0.log" Feb 04 09:51:55 crc kubenswrapper[4644]: I0204 09:51:55.235007 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/account-replicator/0.log" Feb 04 09:51:55 crc kubenswrapper[4644]: I0204 09:51:55.258283 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/account-reaper/0.log" Feb 04 09:51:55 crc kubenswrapper[4644]: I0204 09:51:55.463396 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/account-server/0.log" Feb 04 09:51:55 crc kubenswrapper[4644]: I0204 09:51:55.515647 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/container-auditor/0.log" Feb 04 09:51:55 crc kubenswrapper[4644]: I0204 09:51:55.599117 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/container-server/0.log" Feb 04 09:51:55 crc kubenswrapper[4644]: I0204 09:51:55.625583 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/container-replicator/0.log" Feb 04 09:51:55 crc kubenswrapper[4644]: I0204 09:51:55.851775 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/object-auditor/0.log" Feb 04 09:51:55 crc kubenswrapper[4644]: I0204 09:51:55.867194 4644 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/container-updater/0.log" Feb 04 09:51:55 crc kubenswrapper[4644]: I0204 09:51:55.962956 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/object-replicator/0.log" Feb 04 09:51:55 crc kubenswrapper[4644]: I0204 09:51:55.986676 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/object-expirer/0.log" Feb 04 09:51:56 crc kubenswrapper[4644]: I0204 09:51:56.206061 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/object-server/0.log" Feb 04 09:51:56 crc kubenswrapper[4644]: I0204 09:51:56.238717 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/object-updater/0.log" Feb 04 09:51:56 crc kubenswrapper[4644]: I0204 09:51:56.275638 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/rsync/0.log" Feb 04 09:51:56 crc kubenswrapper[4644]: I0204 09:51:56.365839 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/swift-recon-cron/0.log" Feb 04 09:51:56 crc kubenswrapper[4644]: I0204 09:51:56.600105 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-gv78k_feb1a5d9-f2df-4534-8a80-73d11c854b35/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:51:56 crc kubenswrapper[4644]: I0204 09:51:56.666109 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a/tempest-tests-tempest-tests-runner/0.log" Feb 04 09:51:56 crc kubenswrapper[4644]: I0204 09:51:56.870637 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_2b70d9a5-0c99-4bca-b9e9-8212e140403a/test-operator-logs-container/0.log" Feb 04 09:51:56 crc kubenswrapper[4644]: I0204 09:51:56.983666 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7_42aaff39-4ff2-44b5-9770-56fc11241b30/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 09:52:09 crc kubenswrapper[4644]: I0204 09:52:09.413966 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_c568fc8f-9f0b-496b-b39e-51ef99241e6e/memcached/0.log" Feb 04 09:52:31 crc kubenswrapper[4644]: I0204 09:52:31.989341 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-fhr46_86635827-026c-4145-9130-3c300da69963/manager/0.log" Feb 04 09:52:32 crc kubenswrapper[4644]: I0204 09:52:32.236422 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-sxbgc_65e46d7b-9b3f-447b-91da-35322d406623/manager/0.log" Feb 04 09:52:32 crc kubenswrapper[4644]: I0204 09:52:32.346838 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-hwkc4_3bb04651-3f3e-4f0a-8822-11279a338e20/manager/0.log" Feb 04 09:52:32 crc kubenswrapper[4644]: I0204 09:52:32.506152 4644 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp_92a3c41f-925d-4ff3-a3ff-77f9c35216fb/util/0.log" Feb 04 09:52:32 crc kubenswrapper[4644]: I0204 09:52:32.754606 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp_92a3c41f-925d-4ff3-a3ff-77f9c35216fb/pull/0.log" Feb 04 09:52:32 crc kubenswrapper[4644]: I0204 09:52:32.755184 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp_92a3c41f-925d-4ff3-a3ff-77f9c35216fb/pull/0.log" Feb 04 09:52:32 crc kubenswrapper[4644]: I0204 09:52:32.757043 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp_92a3c41f-925d-4ff3-a3ff-77f9c35216fb/util/0.log" Feb 04 09:52:32 crc kubenswrapper[4644]: I0204 09:52:32.923412 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp_92a3c41f-925d-4ff3-a3ff-77f9c35216fb/util/0.log" Feb 04 09:52:33 crc kubenswrapper[4644]: I0204 09:52:33.016841 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp_92a3c41f-925d-4ff3-a3ff-77f9c35216fb/extract/0.log" Feb 04 09:52:33 crc kubenswrapper[4644]: I0204 09:52:33.030681 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp_92a3c41f-925d-4ff3-a3ff-77f9c35216fb/pull/0.log" Feb 04 09:52:33 crc kubenswrapper[4644]: I0204 09:52:33.233871 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-stnhl_362644b0-399b-4476-b8f7-9723011b9053/manager/0.log" Feb 04 09:52:33 crc kubenswrapper[4644]: I0204 09:52:33.323769 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-cln6d_b449c147-de4b-4503-b680-86e2a43715e2/manager/0.log" Feb 04 09:52:33 crc kubenswrapper[4644]: I0204 09:52:33.485264 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-g9w8f_718025b3-0dfa-4c50-a020-8fc030f6061c/manager/0.log" Feb 04 09:52:33 crc kubenswrapper[4644]: I0204 09:52:33.778393 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-pb5zg_b3816529-aae3-447c-b497-027d78669856/manager/0.log" Feb 04 09:52:33 crc kubenswrapper[4644]: I0204 09:52:33.872604 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-6ldzh_af50abdc-12fd-4e29-b6ce-804f91e185f5/manager/0.log" Feb 04 09:52:34 crc kubenswrapper[4644]: I0204 09:52:34.006758 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-xmsgv_e9033b55-edfc-440d-bd2c-fa027d27f034/manager/0.log" Feb 04 09:52:34 crc kubenswrapper[4644]: I0204 09:52:34.142461 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-t5sv7_f1aab4ac-082c-4c69-94c8-6291514178b7/manager/0.log" Feb 04 09:52:34 crc kubenswrapper[4644]: I0204 09:52:34.301697 4644 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-xw5rw_08ce9496-06f2-4a40-aac7-eaddbc4eb617/manager/0.log" Feb 04 09:52:34 crc kubenswrapper[4644]: I0204 09:52:34.462690 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-6mv9v_1126de8e-d0ae-4d0d-a7d3-cad73f6cc672/manager/0.log" Feb 04 09:52:34 crc kubenswrapper[4644]: I0204 09:52:34.622684 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-v6q27_6f482e24-1f12-48bd-8944-93b1e7ee2d76/manager/0.log" Feb 04 09:52:34 crc kubenswrapper[4644]: I0204 09:52:34.741762 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-9n6pj_0d5154cd-bccf-4112-a9b5-df0cf8375905/manager/0.log" Feb 04 09:52:34 crc kubenswrapper[4644]: I0204 09:52:34.830579 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4d777fx_d92e25ae-9963-4073-9b4e-66f4aafff7a6/manager/0.log" Feb 04 09:52:35 crc kubenswrapper[4644]: I0204 09:52:35.206292 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7779fb4444-rsl7v_9e804a34-fb91-4608-84f0-08283597694b/operator/0.log" Feb 04 09:52:35 crc kubenswrapper[4644]: I0204 09:52:35.501728 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-85gvc_fad001d0-1475-450d-97d9-714d13e42d37/registry-server/0.log" Feb 04 09:52:35 crc kubenswrapper[4644]: I0204 09:52:35.751616 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-hp2fd_dca5895b-8bfa-4060-a60d-79e37d0eefe6/manager/0.log" Feb 04 09:52:36 crc kubenswrapper[4644]: I0204 09:52:36.042734 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-4r2z6_e6482c44-8c91-4931-aceb-b18c7418a6c4/manager/0.log" Feb 04 09:52:36 crc kubenswrapper[4644]: I0204 09:52:36.214893 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vgccb_9e6331c7-8b94-4ded-92d0-e9db7bbd45ec/operator/0.log" Feb 04 09:52:36 crc kubenswrapper[4644]: I0204 09:52:36.519516 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-7jlm9_b74f9275-a7ff-4b5f-a6e1-3adff65c8a71/manager/0.log" Feb 04 09:52:36 crc kubenswrapper[4644]: I0204 09:52:36.685012 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-tc45m_bb1b03f9-1c9b-4cfb-9503-4c28a27f2d96/manager/0.log" Feb 04 09:52:36 crc kubenswrapper[4644]: I0204 09:52:36.766261 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-69b675f8c4-g2gnp_ddb47eef-c05a-40c3-8d94-dd9187b61267/manager/0.log" Feb 04 09:52:36 crc kubenswrapper[4644]: I0204 09:52:36.826074 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-9msfm_8b00283c-6f66-489b-b929-bbd1a5706b67/manager/0.log" Feb 04 09:52:36 crc kubenswrapper[4644]: I0204 09:52:36.940018 4644 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-8l8s8_277bd37d-6c35-4b57-b7bd-b6bb3f1043fe/manager/0.log" Feb 04 09:52:59 crc kubenswrapper[4644]: I0204 09:52:59.639722 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-htm2g_f846627e-2b5c-4fed-8898-e734c9dbce9b/control-plane-machine-set-operator/0.log" Feb 04 09:52:59 crc kubenswrapper[4644]: I0204 09:52:59.832856 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vw5x9_c09e24ca-d42d-4f59-9a19-83410a062bb1/kube-rbac-proxy/0.log" Feb 04 09:52:59 crc kubenswrapper[4644]: I0204 09:52:59.876406 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vw5x9_c09e24ca-d42d-4f59-9a19-83410a062bb1/machine-api-operator/0.log" Feb 04 09:53:13 crc kubenswrapper[4644]: I0204 09:53:13.349392 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-mpbm5_ea2632db-c8cd-42a9-8f74-d989cf9f77a2/cert-manager-controller/0.log" Feb 04 09:53:13 crc kubenswrapper[4644]: I0204 09:53:13.527784 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-27drg_cac9d42c-34be-410d-aca7-2346943b13c6/cert-manager-cainjector/0.log" Feb 04 09:53:13 crc kubenswrapper[4644]: I0204 09:53:13.563984 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-nk2qt_ce66b184-f3af-4f9c-b86d-138993d4114b/cert-manager-webhook/0.log" Feb 04 09:53:27 crc kubenswrapper[4644]: I0204 09:53:27.306144 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6f874f9768-mhn4n_19ba6f84-da44-468a-bf88-2d5861308d59/nmstate-console-plugin/0.log" Feb 04 09:53:27 crc kubenswrapper[4644]: I0204 09:53:27.543829 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q22qq_736f2cd3-420f-4c26-91ad-acd900c9fa01/nmstate-handler/0.log" Feb 04 09:53:27 crc kubenswrapper[4644]: I0204 09:53:27.579263 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-677949fd65-q44mg_bf7f3412-56f2-4b59-bd63-86f748e1d27f/kube-rbac-proxy/0.log" Feb 04 09:53:27 crc kubenswrapper[4644]: I0204 09:53:27.689636 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-677949fd65-q44mg_bf7f3412-56f2-4b59-bd63-86f748e1d27f/nmstate-metrics/0.log" Feb 04 09:53:27 crc kubenswrapper[4644]: I0204 09:53:27.718804 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-57bf49857b-w2rnn_292e6d27-c5ff-4352-a25e-a8b40030e9e2/nmstate-operator/0.log" Feb 04 09:53:28 crc kubenswrapper[4644]: I0204 09:53:28.587234 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-bd5678b45-bzsxp_2e070277-6ff5-41d0-ade7-81a146232b83/nmstate-webhook/0.log" Feb 04 09:53:35 crc kubenswrapper[4644]: I0204 09:53:35.555018 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:53:35 crc kubenswrapper[4644]: I0204 09:53:35.555512 4644 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:53:59 crc kubenswrapper[4644]: I0204 09:53:59.402299 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-9c48fdfd-z7zmw_ef11c1e1-54cf-4428-9a73-9a8eb183dde6/kube-rbac-proxy/0.log" Feb 04 09:53:59 crc kubenswrapper[4644]: I0204 09:53:59.429398 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-9c48fdfd-z7zmw_ef11c1e1-54cf-4428-9a73-9a8eb183dde6/controller/0.log" Feb 04 09:53:59 crc kubenswrapper[4644]: I0204 09:53:59.683114 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-frr-files/0.log" Feb 04 09:53:59 crc kubenswrapper[4644]: I0204 09:53:59.888470 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-frr-files/0.log" Feb 04 09:53:59 crc kubenswrapper[4644]: I0204 09:53:59.939886 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-metrics/0.log" Feb 04 09:53:59 crc kubenswrapper[4644]: I0204 09:53:59.965950 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-reloader/0.log" Feb 04 09:53:59 crc kubenswrapper[4644]: I0204 09:53:59.988354 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-reloader/0.log" Feb 04 09:54:00 crc kubenswrapper[4644]: I0204 09:54:00.175128 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-metrics/0.log" Feb 04 09:54:00 crc kubenswrapper[4644]: I0204 09:54:00.250691 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-frr-files/0.log" Feb 04 09:54:00 crc kubenswrapper[4644]: I0204 09:54:00.298302 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-metrics/0.log" Feb 04 09:54:00 crc kubenswrapper[4644]: I0204 09:54:00.305510 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-reloader/0.log" Feb 04 09:54:00 crc kubenswrapper[4644]: I0204 09:54:00.456907 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-frr-files/0.log" Feb 04 09:54:00 crc kubenswrapper[4644]: I0204 09:54:00.478403 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-reloader/0.log" Feb 04 09:54:00 crc kubenswrapper[4644]: I0204 09:54:00.561200 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/controller/0.log" Feb 04 09:54:00 crc kubenswrapper[4644]: I0204 09:54:00.579727 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-metrics/0.log" Feb 04 09:54:00 crc kubenswrapper[4644]: I0204 
09:54:00.700897 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/frr-metrics/0.log" Feb 04 09:54:00 crc kubenswrapper[4644]: I0204 09:54:00.840395 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/kube-rbac-proxy/0.log" Feb 04 09:54:00 crc kubenswrapper[4644]: I0204 09:54:00.864799 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/kube-rbac-proxy-frr/0.log" Feb 04 09:54:01 crc kubenswrapper[4644]: I0204 09:54:01.057072 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/reloader/0.log" Feb 04 09:54:01 crc kubenswrapper[4644]: I0204 09:54:01.178960 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-97dfd4f9f-jcnsg_fd959e6b-00cf-4818-8b5a-0ad09c060e5e/frr-k8s-webhook-server/0.log" Feb 04 09:54:01 crc kubenswrapper[4644]: I0204 09:54:01.559003 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-668579b8df-dc2hb_880260a9-a2e8-463c-97ba-3b936f884d9d/manager/0.log" Feb 04 09:54:01 crc kubenswrapper[4644]: I0204 09:54:01.681438 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-b86757d9b-m6f8p_e5e99bd5-408c-4369-bd40-b31bb61ffc43/webhook-server/0.log" Feb 04 09:54:01 crc kubenswrapper[4644]: I0204 09:54:01.897768 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-twwks_4496f888-8e49-4a88-b753-7f2d55dc317a/kube-rbac-proxy/0.log" Feb 04 09:54:02 crc kubenswrapper[4644]: I0204 09:54:02.211216 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/frr/0.log" Feb 04 09:54:02 crc kubenswrapper[4644]: I0204 09:54:02.448213 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-twwks_4496f888-8e49-4a88-b753-7f2d55dc317a/speaker/0.log" Feb 04 09:54:05 crc kubenswrapper[4644]: I0204 09:54:05.555311 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:54:05 crc kubenswrapper[4644]: I0204 09:54:05.555791 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:54:14 crc kubenswrapper[4644]: I0204 09:54:14.031880 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z4p52"] Feb 04 09:54:14 crc kubenswrapper[4644]: E0204 09:54:14.034145 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609940be-f8ea-451d-af79-d4c04f154d5b" containerName="container-00" Feb 04 09:54:14 crc kubenswrapper[4644]: I0204 09:54:14.034177 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="609940be-f8ea-451d-af79-d4c04f154d5b" containerName="container-00" Feb 04 09:54:14 crc kubenswrapper[4644]: I0204 09:54:14.034429 
4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="609940be-f8ea-451d-af79-d4c04f154d5b" containerName="container-00" Feb 04 09:54:14 crc kubenswrapper[4644]: I0204 09:54:14.036143 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:14 crc kubenswrapper[4644]: I0204 09:54:14.086360 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-catalog-content\") pod \"community-operators-z4p52\" (UID: \"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31\") " pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:14 crc kubenswrapper[4644]: I0204 09:54:14.086538 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-utilities\") pod \"community-operators-z4p52\" (UID: \"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31\") " pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:14 crc kubenswrapper[4644]: I0204 09:54:14.086573 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dpp7\" (UniqueName: \"kubernetes.io/projected/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-kube-api-access-4dpp7\") pod \"community-operators-z4p52\" (UID: \"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31\") " pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:14 crc kubenswrapper[4644]: I0204 09:54:14.104321 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z4p52"] Feb 04 09:54:14 crc kubenswrapper[4644]: I0204 09:54:14.188334 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-catalog-content\") pod \"community-operators-z4p52\" (UID: \"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31\") " pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:14 crc kubenswrapper[4644]: I0204 09:54:14.188517 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-utilities\") pod \"community-operators-z4p52\" (UID: \"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31\") " pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:14 crc kubenswrapper[4644]: I0204 09:54:14.188557 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dpp7\" (UniqueName: \"kubernetes.io/projected/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-kube-api-access-4dpp7\") pod \"community-operators-z4p52\" (UID: \"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31\") " pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:14 crc kubenswrapper[4644]: I0204 09:54:14.188975 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-utilities\") pod \"community-operators-z4p52\" (UID: \"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31\") " pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:14 crc kubenswrapper[4644]: I0204 09:54:14.188991 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-catalog-content\") pod \"community-operators-z4p52\" (UID: \"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31\") " pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:14 crc kubenswrapper[4644]: I0204 09:54:14.220505 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dpp7\" (UniqueName: \"kubernetes.io/projected/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-kube-api-access-4dpp7\") pod \"community-operators-z4p52\" (UID: \"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31\") " pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:14 crc kubenswrapper[4644]: I0204 09:54:14.357367 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:15 crc kubenswrapper[4644]: I0204 09:54:15.014494 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z4p52"] Feb 04 09:54:15 crc kubenswrapper[4644]: I0204 09:54:15.908652 4644 generic.go:334] "Generic (PLEG): container finished" podID="21e4c89e-bf1b-4b1e-8ee0-94af158f7c31" containerID="497dfebc4cfbd4d15945745c9c437cfca91c55c860c8a80e8b25dbe9c09fe990" exitCode=0 Feb 04 09:54:15 crc kubenswrapper[4644]: I0204 09:54:15.909590 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4p52" event={"ID":"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31","Type":"ContainerDied","Data":"497dfebc4cfbd4d15945745c9c437cfca91c55c860c8a80e8b25dbe9c09fe990"} Feb 04 09:54:15 crc kubenswrapper[4644]: I0204 09:54:15.909640 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4p52" event={"ID":"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31","Type":"ContainerStarted","Data":"88c8bc197d433a77f71b679c22aaad97cfc3a32755960f2415cfebea1ae622cb"} Feb 04 09:54:15 crc kubenswrapper[4644]: I0204 09:54:15.911452 4644 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 09:54:16 crc kubenswrapper[4644]: I0204 09:54:16.919118 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4p52" event={"ID":"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31","Type":"ContainerStarted","Data":"d1d5eeabb33376f96a158fc0a9ec79bbeb0181a0667034261e755f7c5811f4a7"} Feb 04 09:54:17 crc kubenswrapper[4644]: I0204 09:54:17.775042 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd_c161aa28-3e38-4cd4-9b44-29cffcdf6c81/util/0.log" Feb 04 09:54:18 crc kubenswrapper[4644]: I0204 09:54:18.093981 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd_c161aa28-3e38-4cd4-9b44-29cffcdf6c81/pull/0.log" Feb 04 09:54:18 crc kubenswrapper[4644]: I0204 09:54:18.121229 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd_c161aa28-3e38-4cd4-9b44-29cffcdf6c81/util/0.log" Feb 04 09:54:18 crc kubenswrapper[4644]: I0204 09:54:18.139390 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd_c161aa28-3e38-4cd4-9b44-29cffcdf6c81/pull/0.log" Feb 04 09:54:18 crc kubenswrapper[4644]: I0204 09:54:18.314570 4644 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd_c161aa28-3e38-4cd4-9b44-29cffcdf6c81/util/0.log" Feb 04 09:54:18 crc kubenswrapper[4644]: I0204 09:54:18.315150 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd_c161aa28-3e38-4cd4-9b44-29cffcdf6c81/pull/0.log" Feb 04 09:54:18 crc kubenswrapper[4644]: I0204 09:54:18.332902 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd_c161aa28-3e38-4cd4-9b44-29cffcdf6c81/extract/0.log" Feb 04 09:54:18 crc kubenswrapper[4644]: I0204 09:54:18.484449 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd_4269a3f6-3fd6-4c8c-8dbd-08681fd28e39/util/0.log" Feb 04 09:54:18 crc kubenswrapper[4644]: I0204 09:54:18.756501 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd_4269a3f6-3fd6-4c8c-8dbd-08681fd28e39/pull/0.log" Feb 04 09:54:18 crc kubenswrapper[4644]: I0204 09:54:18.772367 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd_4269a3f6-3fd6-4c8c-8dbd-08681fd28e39/util/0.log" Feb 04 09:54:18 crc kubenswrapper[4644]: I0204 09:54:18.804555 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd_4269a3f6-3fd6-4c8c-8dbd-08681fd28e39/pull/0.log" Feb 04 09:54:19 crc kubenswrapper[4644]: I0204 09:54:19.497794 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd_4269a3f6-3fd6-4c8c-8dbd-08681fd28e39/util/0.log" Feb 04 09:54:19 crc kubenswrapper[4644]: I0204 09:54:19.507991 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd_4269a3f6-3fd6-4c8c-8dbd-08681fd28e39/extract/0.log" Feb 04 09:54:19 crc kubenswrapper[4644]: I0204 09:54:19.599505 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd_4269a3f6-3fd6-4c8c-8dbd-08681fd28e39/pull/0.log" Feb 04 09:54:19 crc kubenswrapper[4644]: I0204 09:54:19.722001 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t7k59_3b67abb6-99ac-49bb-8ec2-5a445e1c18e6/extract-utilities/0.log" Feb 04 09:54:19 crc kubenswrapper[4644]: I0204 09:54:19.856430 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t7k59_3b67abb6-99ac-49bb-8ec2-5a445e1c18e6/extract-content/0.log" Feb 04 09:54:19 crc kubenswrapper[4644]: I0204 09:54:19.903595 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t7k59_3b67abb6-99ac-49bb-8ec2-5a445e1c18e6/extract-utilities/0.log" Feb 04 09:54:19 crc kubenswrapper[4644]: I0204 09:54:19.917378 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t7k59_3b67abb6-99ac-49bb-8ec2-5a445e1c18e6/extract-content/0.log" Feb 04 09:54:19 crc kubenswrapper[4644]: I0204 09:54:19.955215 4644 generic.go:334] 
"Generic (PLEG): container finished" podID="21e4c89e-bf1b-4b1e-8ee0-94af158f7c31" containerID="d1d5eeabb33376f96a158fc0a9ec79bbeb0181a0667034261e755f7c5811f4a7" exitCode=0 Feb 04 09:54:19 crc kubenswrapper[4644]: I0204 09:54:19.955257 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4p52" event={"ID":"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31","Type":"ContainerDied","Data":"d1d5eeabb33376f96a158fc0a9ec79bbeb0181a0667034261e755f7c5811f4a7"} Feb 04 09:54:20 crc kubenswrapper[4644]: I0204 09:54:20.146803 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t7k59_3b67abb6-99ac-49bb-8ec2-5a445e1c18e6/extract-utilities/0.log" Feb 04 09:54:20 crc kubenswrapper[4644]: I0204 09:54:20.163412 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t7k59_3b67abb6-99ac-49bb-8ec2-5a445e1c18e6/extract-content/0.log" Feb 04 09:54:20 crc kubenswrapper[4644]: I0204 09:54:20.361691 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9q6zx_acca16e8-193f-4e4c-acee-8f1067e6260f/extract-utilities/0.log" Feb 04 09:54:20 crc kubenswrapper[4644]: I0204 09:54:20.635788 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t7k59_3b67abb6-99ac-49bb-8ec2-5a445e1c18e6/registry-server/0.log" Feb 04 09:54:20 crc kubenswrapper[4644]: I0204 09:54:20.719864 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9q6zx_acca16e8-193f-4e4c-acee-8f1067e6260f/extract-utilities/0.log" Feb 04 09:54:20 crc kubenswrapper[4644]: I0204 09:54:20.777812 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9q6zx_acca16e8-193f-4e4c-acee-8f1067e6260f/extract-content/0.log" Feb 04 09:54:20 crc kubenswrapper[4644]: I0204 09:54:20.795055 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9q6zx_acca16e8-193f-4e4c-acee-8f1067e6260f/extract-content/0.log" Feb 04 09:54:20 crc kubenswrapper[4644]: I0204 09:54:20.964247 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4p52" event={"ID":"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31","Type":"ContainerStarted","Data":"90edca843b841d76d1956ff5519e028cc7a97d60a5eaa1caa7b220012dfc1291"} Feb 04 09:54:20 crc kubenswrapper[4644]: I0204 09:54:20.992428 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z4p52" podStartSLOduration=2.476535994 podStartE2EDuration="6.992407397s" podCreationTimestamp="2026-02-04 09:54:14 +0000 UTC" firstStartedPulling="2026-02-04 09:54:15.911186149 +0000 UTC m=+4365.951243904" lastFinishedPulling="2026-02-04 09:54:20.427057552 +0000 UTC m=+4370.467115307" observedRunningTime="2026-02-04 09:54:20.980527154 +0000 UTC m=+4371.020584909" watchObservedRunningTime="2026-02-04 09:54:20.992407397 +0000 UTC m=+4371.032465152" Feb 04 09:54:21 crc kubenswrapper[4644]: I0204 09:54:21.003598 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9q6zx_acca16e8-193f-4e4c-acee-8f1067e6260f/extract-content/0.log" Feb 04 09:54:21 crc kubenswrapper[4644]: I0204 09:54:21.106006 4644 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-9q6zx_acca16e8-193f-4e4c-acee-8f1067e6260f/extract-utilities/0.log" Feb 04 09:54:21 crc kubenswrapper[4644]: I0204 09:54:21.346142 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z4p52_21e4c89e-bf1b-4b1e-8ee0-94af158f7c31/extract-utilities/0.log" Feb 04 09:54:21 crc kubenswrapper[4644]: I0204 09:54:21.660234 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z4p52_21e4c89e-bf1b-4b1e-8ee0-94af158f7c31/extract-utilities/0.log" Feb 04 09:54:21 crc kubenswrapper[4644]: I0204 09:54:21.737633 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z4p52_21e4c89e-bf1b-4b1e-8ee0-94af158f7c31/extract-content/0.log" Feb 04 09:54:21 crc kubenswrapper[4644]: I0204 09:54:21.790881 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z4p52_21e4c89e-bf1b-4b1e-8ee0-94af158f7c31/extract-content/0.log" Feb 04 09:54:21 crc kubenswrapper[4644]: I0204 09:54:21.958532 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9q6zx_acca16e8-193f-4e4c-acee-8f1067e6260f/registry-server/0.log" Feb 04 09:54:22 crc kubenswrapper[4644]: I0204 09:54:22.040303 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z4p52_21e4c89e-bf1b-4b1e-8ee0-94af158f7c31/extract-content/0.log" Feb 04 09:54:22 crc kubenswrapper[4644]: I0204 09:54:22.041194 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z4p52_21e4c89e-bf1b-4b1e-8ee0-94af158f7c31/extract-utilities/0.log" Feb 04 09:54:22 crc kubenswrapper[4644]: I0204 09:54:22.135983 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z4p52_21e4c89e-bf1b-4b1e-8ee0-94af158f7c31/registry-server/0.log" Feb 04 09:54:22 crc kubenswrapper[4644]: I0204 09:54:22.292776 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nhnlp_42a112d4-2c64-4c7f-a895-a85e29b12d8a/marketplace-operator/2.log" Feb 04 09:54:22 crc kubenswrapper[4644]: I0204 09:54:22.375901 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nhnlp_42a112d4-2c64-4c7f-a895-a85e29b12d8a/marketplace-operator/1.log" Feb 04 09:54:22 crc kubenswrapper[4644]: I0204 09:54:22.468776 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nts4j_0ddce8ec-a47d-4efc-b273-56ec3223320d/extract-utilities/0.log" Feb 04 09:54:22 crc kubenswrapper[4644]: I0204 09:54:22.636169 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nts4j_0ddce8ec-a47d-4efc-b273-56ec3223320d/extract-content/0.log" Feb 04 09:54:22 crc kubenswrapper[4644]: I0204 09:54:22.652985 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nts4j_0ddce8ec-a47d-4efc-b273-56ec3223320d/extract-content/0.log" Feb 04 09:54:22 crc kubenswrapper[4644]: I0204 09:54:22.688262 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nts4j_0ddce8ec-a47d-4efc-b273-56ec3223320d/extract-utilities/0.log" Feb 04 09:54:22 crc kubenswrapper[4644]: I0204 09:54:22.980225 4644 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nts4j_0ddce8ec-a47d-4efc-b273-56ec3223320d/extract-content/0.log" Feb 04 09:54:23 crc kubenswrapper[4644]: I0204 09:54:23.014316 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9k5x_5b38357a-6348-4d8c-b09d-a06cbdd14739/extract-utilities/0.log" Feb 04 09:54:23 crc kubenswrapper[4644]: I0204 09:54:23.045682 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nts4j_0ddce8ec-a47d-4efc-b273-56ec3223320d/extract-utilities/0.log" Feb 04 09:54:23 crc kubenswrapper[4644]: I0204 09:54:23.138229 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nts4j_0ddce8ec-a47d-4efc-b273-56ec3223320d/registry-server/0.log" Feb 04 09:54:23 crc kubenswrapper[4644]: I0204 09:54:23.295402 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9k5x_5b38357a-6348-4d8c-b09d-a06cbdd14739/extract-utilities/0.log" Feb 04 09:54:23 crc kubenswrapper[4644]: I0204 09:54:23.304867 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9k5x_5b38357a-6348-4d8c-b09d-a06cbdd14739/extract-content/0.log" Feb 04 09:54:23 crc kubenswrapper[4644]: I0204 09:54:23.305076 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9k5x_5b38357a-6348-4d8c-b09d-a06cbdd14739/extract-content/0.log" Feb 04 09:54:23 crc kubenswrapper[4644]: I0204 09:54:23.526742 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9k5x_5b38357a-6348-4d8c-b09d-a06cbdd14739/extract-content/0.log" Feb 04 09:54:23 crc kubenswrapper[4644]: I0204 09:54:23.587932 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9k5x_5b38357a-6348-4d8c-b09d-a06cbdd14739/extract-utilities/0.log" Feb 04 09:54:24 crc kubenswrapper[4644]: I0204 09:54:24.142086 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9k5x_5b38357a-6348-4d8c-b09d-a06cbdd14739/registry-server/0.log" Feb 04 09:54:24 crc kubenswrapper[4644]: I0204 09:54:24.358422 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:24 crc kubenswrapper[4644]: I0204 09:54:24.358480 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:24 crc kubenswrapper[4644]: I0204 09:54:24.409212 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:34 crc kubenswrapper[4644]: I0204 09:54:34.409254 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:34 crc kubenswrapper[4644]: I0204 09:54:34.464103 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z4p52"] Feb 04 09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.081735 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z4p52" podUID="21e4c89e-bf1b-4b1e-8ee0-94af158f7c31" containerName="registry-server" containerID="cri-o://90edca843b841d76d1956ff5519e028cc7a97d60a5eaa1caa7b220012dfc1291" gracePeriod=2 Feb 04 
09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.555304 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.555811 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.555880 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.557113 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.557193 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" gracePeriod=600 Feb 04 09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.572231 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.635216 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dpp7\" (UniqueName: \"kubernetes.io/projected/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-kube-api-access-4dpp7\") pod \"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31\" (UID: \"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31\") " Feb 04 09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.635287 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-utilities\") pod \"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31\" (UID: \"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31\") " Feb 04 09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.635378 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-catalog-content\") pod \"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31\" (UID: \"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31\") " Feb 04 09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.637146 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-utilities" (OuterVolumeSpecName: "utilities") pod "21e4c89e-bf1b-4b1e-8ee0-94af158f7c31" (UID: "21e4c89e-bf1b-4b1e-8ee0-94af158f7c31"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.640221 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.653294 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-kube-api-access-4dpp7" (OuterVolumeSpecName: "kube-api-access-4dpp7") pod "21e4c89e-bf1b-4b1e-8ee0-94af158f7c31" (UID: "21e4c89e-bf1b-4b1e-8ee0-94af158f7c31"). InnerVolumeSpecName "kube-api-access-4dpp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:54:35 crc kubenswrapper[4644]: E0204 09:54:35.694844 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.714532 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21e4c89e-bf1b-4b1e-8ee0-94af158f7c31" (UID: "21e4c89e-bf1b-4b1e-8ee0-94af158f7c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.742857 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dpp7\" (UniqueName: \"kubernetes.io/projected/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-kube-api-access-4dpp7\") on node \"crc\" DevicePath \"\"" Feb 04 09:54:35 crc kubenswrapper[4644]: I0204 09:54:35.742900 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.092381 4644 generic.go:334] "Generic (PLEG): container finished" podID="21e4c89e-bf1b-4b1e-8ee0-94af158f7c31" containerID="90edca843b841d76d1956ff5519e028cc7a97d60a5eaa1caa7b220012dfc1291" exitCode=0 Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.092465 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z4p52" Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.092493 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4p52" event={"ID":"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31","Type":"ContainerDied","Data":"90edca843b841d76d1956ff5519e028cc7a97d60a5eaa1caa7b220012dfc1291"} Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.092934 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4p52" event={"ID":"21e4c89e-bf1b-4b1e-8ee0-94af158f7c31","Type":"ContainerDied","Data":"88c8bc197d433a77f71b679c22aaad97cfc3a32755960f2415cfebea1ae622cb"} Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.092987 4644 scope.go:117] "RemoveContainer" containerID="90edca843b841d76d1956ff5519e028cc7a97d60a5eaa1caa7b220012dfc1291" Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.096712 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" exitCode=0 Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.096748 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39"} Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.097081 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:54:36 crc kubenswrapper[4644]: E0204 09:54:36.097340 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.114619 4644 scope.go:117] "RemoveContainer" containerID="d1d5eeabb33376f96a158fc0a9ec79bbeb0181a0667034261e755f7c5811f4a7" Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.150677 4644 scope.go:117] "RemoveContainer" containerID="497dfebc4cfbd4d15945745c9c437cfca91c55c860c8a80e8b25dbe9c09fe990" Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.157679 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z4p52"] Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.184223 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z4p52"] Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.194843 4644 scope.go:117] "RemoveContainer" containerID="90edca843b841d76d1956ff5519e028cc7a97d60a5eaa1caa7b220012dfc1291" Feb 04 09:54:36 crc kubenswrapper[4644]: E0204 09:54:36.199443 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90edca843b841d76d1956ff5519e028cc7a97d60a5eaa1caa7b220012dfc1291\": container with ID starting with 90edca843b841d76d1956ff5519e028cc7a97d60a5eaa1caa7b220012dfc1291 not found: ID does not exist" containerID="90edca843b841d76d1956ff5519e028cc7a97d60a5eaa1caa7b220012dfc1291" Feb 04 09:54:36 crc kubenswrapper[4644]: 
I0204 09:54:36.199483 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90edca843b841d76d1956ff5519e028cc7a97d60a5eaa1caa7b220012dfc1291"} err="failed to get container status \"90edca843b841d76d1956ff5519e028cc7a97d60a5eaa1caa7b220012dfc1291\": rpc error: code = NotFound desc = could not find container \"90edca843b841d76d1956ff5519e028cc7a97d60a5eaa1caa7b220012dfc1291\": container with ID starting with 90edca843b841d76d1956ff5519e028cc7a97d60a5eaa1caa7b220012dfc1291 not found: ID does not exist" Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.199511 4644 scope.go:117] "RemoveContainer" containerID="d1d5eeabb33376f96a158fc0a9ec79bbeb0181a0667034261e755f7c5811f4a7" Feb 04 09:54:36 crc kubenswrapper[4644]: E0204 09:54:36.201773 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1d5eeabb33376f96a158fc0a9ec79bbeb0181a0667034261e755f7c5811f4a7\": container with ID starting with d1d5eeabb33376f96a158fc0a9ec79bbeb0181a0667034261e755f7c5811f4a7 not found: ID does not exist" containerID="d1d5eeabb33376f96a158fc0a9ec79bbeb0181a0667034261e755f7c5811f4a7" Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.201803 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1d5eeabb33376f96a158fc0a9ec79bbeb0181a0667034261e755f7c5811f4a7"} err="failed to get container status \"d1d5eeabb33376f96a158fc0a9ec79bbeb0181a0667034261e755f7c5811f4a7\": rpc error: code = NotFound desc = could not find container \"d1d5eeabb33376f96a158fc0a9ec79bbeb0181a0667034261e755f7c5811f4a7\": container with ID starting with d1d5eeabb33376f96a158fc0a9ec79bbeb0181a0667034261e755f7c5811f4a7 not found: ID does not exist" Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.201817 4644 scope.go:117] "RemoveContainer" containerID="497dfebc4cfbd4d15945745c9c437cfca91c55c860c8a80e8b25dbe9c09fe990" Feb 04 09:54:36 crc kubenswrapper[4644]: E0204 09:54:36.202170 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"497dfebc4cfbd4d15945745c9c437cfca91c55c860c8a80e8b25dbe9c09fe990\": container with ID starting with 497dfebc4cfbd4d15945745c9c437cfca91c55c860c8a80e8b25dbe9c09fe990 not found: ID does not exist" containerID="497dfebc4cfbd4d15945745c9c437cfca91c55c860c8a80e8b25dbe9c09fe990" Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.202191 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"497dfebc4cfbd4d15945745c9c437cfca91c55c860c8a80e8b25dbe9c09fe990"} err="failed to get container status \"497dfebc4cfbd4d15945745c9c437cfca91c55c860c8a80e8b25dbe9c09fe990\": rpc error: code = NotFound desc = could not find container \"497dfebc4cfbd4d15945745c9c437cfca91c55c860c8a80e8b25dbe9c09fe990\": container with ID starting with 497dfebc4cfbd4d15945745c9c437cfca91c55c860c8a80e8b25dbe9c09fe990 not found: ID does not exist" Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.202204 4644 scope.go:117] "RemoveContainer" containerID="f5129befbb7587fea7f550af34a5bec8229582d69cfb16b12c52b0244e3fafa0" Feb 04 09:54:36 crc kubenswrapper[4644]: I0204 09:54:36.670541 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e4c89e-bf1b-4b1e-8ee0-94af158f7c31" path="/var/lib/kubelet/pods/21e4c89e-bf1b-4b1e-8ee0-94af158f7c31/volumes" Feb 04 09:54:41 crc kubenswrapper[4644]: E0204 09:54:41.659026 4644 upgradeaware.go:427] Error 
proxying data from client to backend: readfrom tcp 38.102.83.136:35078->38.102.83.136:44683: write tcp 38.102.83.136:35078->38.102.83.136:44683: write: connection reset by peer Feb 04 09:54:48 crc kubenswrapper[4644]: I0204 09:54:48.660273 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:54:48 crc kubenswrapper[4644]: E0204 09:54:48.661084 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:54:59 crc kubenswrapper[4644]: I0204 09:54:59.661297 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:54:59 crc kubenswrapper[4644]: E0204 09:54:59.662069 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:55:10 crc kubenswrapper[4644]: I0204 09:55:10.667400 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:55:10 crc kubenswrapper[4644]: E0204 09:55:10.669256 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:55:24 crc kubenswrapper[4644]: I0204 09:55:24.660269 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:55:24 crc kubenswrapper[4644]: E0204 09:55:24.661375 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:55:39 crc kubenswrapper[4644]: I0204 09:55:39.659448 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:55:39 crc kubenswrapper[4644]: E0204 09:55:39.660483 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:55:53 
crc kubenswrapper[4644]: I0204 09:55:53.660193 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:55:53 crc kubenswrapper[4644]: E0204 09:55:53.660907 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:56:07 crc kubenswrapper[4644]: I0204 09:56:07.660736 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:56:07 crc kubenswrapper[4644]: E0204 09:56:07.661617 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:56:21 crc kubenswrapper[4644]: I0204 09:56:21.660180 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:56:21 crc kubenswrapper[4644]: E0204 09:56:21.661002 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:56:36 crc kubenswrapper[4644]: I0204 09:56:36.660628 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:56:36 crc kubenswrapper[4644]: E0204 09:56:36.662000 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:56:47 crc kubenswrapper[4644]: I0204 09:56:47.660659 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:56:47 crc kubenswrapper[4644]: E0204 09:56:47.662158 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:57:02 crc kubenswrapper[4644]: I0204 09:57:02.421922 4644 generic.go:334] "Generic (PLEG): container finished" podID="812fc2d9-21b8-400c-aa00-f40d4b2bc39a" 
containerID="e3c87ec462c813faad872c6fb4b126b4a8bb72dae01cb986f8d4c10217a015be" exitCode=0 Feb 04 09:57:02 crc kubenswrapper[4644]: I0204 09:57:02.421948 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xc6qn/must-gather-9v5v9" event={"ID":"812fc2d9-21b8-400c-aa00-f40d4b2bc39a","Type":"ContainerDied","Data":"e3c87ec462c813faad872c6fb4b126b4a8bb72dae01cb986f8d4c10217a015be"} Feb 04 09:57:02 crc kubenswrapper[4644]: I0204 09:57:02.424864 4644 scope.go:117] "RemoveContainer" containerID="e3c87ec462c813faad872c6fb4b126b4a8bb72dae01cb986f8d4c10217a015be" Feb 04 09:57:02 crc kubenswrapper[4644]: I0204 09:57:02.661524 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:57:02 crc kubenswrapper[4644]: E0204 09:57:02.661934 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:57:03 crc kubenswrapper[4644]: I0204 09:57:03.477405 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xc6qn_must-gather-9v5v9_812fc2d9-21b8-400c-aa00-f40d4b2bc39a/gather/0.log" Feb 04 09:57:13 crc kubenswrapper[4644]: I0204 09:57:13.655802 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xc6qn/must-gather-9v5v9"] Feb 04 09:57:13 crc kubenswrapper[4644]: I0204 09:57:13.664523 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-xc6qn/must-gather-9v5v9" podUID="812fc2d9-21b8-400c-aa00-f40d4b2bc39a" containerName="copy" containerID="cri-o://0a74981b3a6b72d64cd30f8741164bbac86a29f80a6dd02ac2ed8796067d384a" gracePeriod=2 Feb 04 09:57:13 crc kubenswrapper[4644]: I0204 09:57:13.669443 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xc6qn/must-gather-9v5v9"] Feb 04 09:57:13 crc kubenswrapper[4644]: I0204 09:57:13.813293 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xc6qn_must-gather-9v5v9_812fc2d9-21b8-400c-aa00-f40d4b2bc39a/copy/0.log" Feb 04 09:57:13 crc kubenswrapper[4644]: I0204 09:57:13.814483 4644 generic.go:334] "Generic (PLEG): container finished" podID="812fc2d9-21b8-400c-aa00-f40d4b2bc39a" containerID="0a74981b3a6b72d64cd30f8741164bbac86a29f80a6dd02ac2ed8796067d384a" exitCode=143 Feb 04 09:57:14 crc kubenswrapper[4644]: I0204 09:57:14.121637 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xc6qn_must-gather-9v5v9_812fc2d9-21b8-400c-aa00-f40d4b2bc39a/copy/0.log" Feb 04 09:57:14 crc kubenswrapper[4644]: I0204 09:57:14.122048 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xc6qn/must-gather-9v5v9" Feb 04 09:57:14 crc kubenswrapper[4644]: I0204 09:57:14.210575 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/812fc2d9-21b8-400c-aa00-f40d4b2bc39a-must-gather-output\") pod \"812fc2d9-21b8-400c-aa00-f40d4b2bc39a\" (UID: \"812fc2d9-21b8-400c-aa00-f40d4b2bc39a\") " Feb 04 09:57:14 crc kubenswrapper[4644]: I0204 09:57:14.211076 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcdg9\" (UniqueName: \"kubernetes.io/projected/812fc2d9-21b8-400c-aa00-f40d4b2bc39a-kube-api-access-mcdg9\") pod \"812fc2d9-21b8-400c-aa00-f40d4b2bc39a\" (UID: \"812fc2d9-21b8-400c-aa00-f40d4b2bc39a\") " Feb 04 09:57:14 crc kubenswrapper[4644]: I0204 09:57:14.400299 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/812fc2d9-21b8-400c-aa00-f40d4b2bc39a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "812fc2d9-21b8-400c-aa00-f40d4b2bc39a" (UID: "812fc2d9-21b8-400c-aa00-f40d4b2bc39a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:57:14 crc kubenswrapper[4644]: I0204 09:57:14.415873 4644 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/812fc2d9-21b8-400c-aa00-f40d4b2bc39a-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 04 09:57:14 crc kubenswrapper[4644]: I0204 09:57:14.660013 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:57:14 crc kubenswrapper[4644]: E0204 09:57:14.660276 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:57:14 crc kubenswrapper[4644]: I0204 09:57:14.783477 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812fc2d9-21b8-400c-aa00-f40d4b2bc39a-kube-api-access-mcdg9" (OuterVolumeSpecName: "kube-api-access-mcdg9") pod "812fc2d9-21b8-400c-aa00-f40d4b2bc39a" (UID: "812fc2d9-21b8-400c-aa00-f40d4b2bc39a"). InnerVolumeSpecName "kube-api-access-mcdg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:57:14 crc kubenswrapper[4644]: I0204 09:57:14.825258 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcdg9\" (UniqueName: \"kubernetes.io/projected/812fc2d9-21b8-400c-aa00-f40d4b2bc39a-kube-api-access-mcdg9\") on node \"crc\" DevicePath \"\"" Feb 04 09:57:14 crc kubenswrapper[4644]: I0204 09:57:14.825777 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xc6qn_must-gather-9v5v9_812fc2d9-21b8-400c-aa00-f40d4b2bc39a/copy/0.log" Feb 04 09:57:14 crc kubenswrapper[4644]: I0204 09:57:14.826399 4644 scope.go:117] "RemoveContainer" containerID="0a74981b3a6b72d64cd30f8741164bbac86a29f80a6dd02ac2ed8796067d384a" Feb 04 09:57:14 crc kubenswrapper[4644]: I0204 09:57:14.826510 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xc6qn/must-gather-9v5v9" Feb 04 09:57:14 crc kubenswrapper[4644]: I0204 09:57:14.867428 4644 scope.go:117] "RemoveContainer" containerID="e3c87ec462c813faad872c6fb4b126b4a8bb72dae01cb986f8d4c10217a015be" Feb 04 09:57:16 crc kubenswrapper[4644]: I0204 09:57:16.670482 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="812fc2d9-21b8-400c-aa00-f40d4b2bc39a" path="/var/lib/kubelet/pods/812fc2d9-21b8-400c-aa00-f40d4b2bc39a/volumes" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.130260 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2jmmz"] Feb 04 09:57:17 crc kubenswrapper[4644]: E0204 09:57:17.130668 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812fc2d9-21b8-400c-aa00-f40d4b2bc39a" containerName="gather" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.130682 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="812fc2d9-21b8-400c-aa00-f40d4b2bc39a" containerName="gather" Feb 04 09:57:17 crc kubenswrapper[4644]: E0204 09:57:17.130701 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e4c89e-bf1b-4b1e-8ee0-94af158f7c31" containerName="extract-content" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.130708 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e4c89e-bf1b-4b1e-8ee0-94af158f7c31" containerName="extract-content" Feb 04 09:57:17 crc kubenswrapper[4644]: E0204 09:57:17.130725 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e4c89e-bf1b-4b1e-8ee0-94af158f7c31" containerName="registry-server" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.130732 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e4c89e-bf1b-4b1e-8ee0-94af158f7c31" containerName="registry-server" Feb 04 09:57:17 crc kubenswrapper[4644]: E0204 09:57:17.130741 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812fc2d9-21b8-400c-aa00-f40d4b2bc39a" containerName="copy" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.130750 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="812fc2d9-21b8-400c-aa00-f40d4b2bc39a" containerName="copy" Feb 04 09:57:17 crc kubenswrapper[4644]: E0204 09:57:17.130772 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e4c89e-bf1b-4b1e-8ee0-94af158f7c31" containerName="extract-utilities" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.130779 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e4c89e-bf1b-4b1e-8ee0-94af158f7c31" containerName="extract-utilities" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.130962 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="812fc2d9-21b8-400c-aa00-f40d4b2bc39a" containerName="gather" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.130973 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e4c89e-bf1b-4b1e-8ee0-94af158f7c31" containerName="registry-server" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.130983 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="812fc2d9-21b8-400c-aa00-f40d4b2bc39a" containerName="copy" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.132346 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.142779 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2jmmz"] Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.272591 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6vp6\" (UniqueName: \"kubernetes.io/projected/3257afec-79c4-4643-8de1-2a27c2ddfe4d-kube-api-access-m6vp6\") pod \"redhat-operators-2jmmz\" (UID: \"3257afec-79c4-4643-8de1-2a27c2ddfe4d\") " pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.272669 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3257afec-79c4-4643-8de1-2a27c2ddfe4d-catalog-content\") pod \"redhat-operators-2jmmz\" (UID: \"3257afec-79c4-4643-8de1-2a27c2ddfe4d\") " pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.272756 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3257afec-79c4-4643-8de1-2a27c2ddfe4d-utilities\") pod \"redhat-operators-2jmmz\" (UID: \"3257afec-79c4-4643-8de1-2a27c2ddfe4d\") " pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.374610 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3257afec-79c4-4643-8de1-2a27c2ddfe4d-utilities\") pod \"redhat-operators-2jmmz\" (UID: \"3257afec-79c4-4643-8de1-2a27c2ddfe4d\") " pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.374746 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6vp6\" (UniqueName: \"kubernetes.io/projected/3257afec-79c4-4643-8de1-2a27c2ddfe4d-kube-api-access-m6vp6\") pod \"redhat-operators-2jmmz\" (UID: \"3257afec-79c4-4643-8de1-2a27c2ddfe4d\") " pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.374816 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3257afec-79c4-4643-8de1-2a27c2ddfe4d-catalog-content\") pod \"redhat-operators-2jmmz\" (UID: \"3257afec-79c4-4643-8de1-2a27c2ddfe4d\") " pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.375170 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3257afec-79c4-4643-8de1-2a27c2ddfe4d-utilities\") pod \"redhat-operators-2jmmz\" (UID: \"3257afec-79c4-4643-8de1-2a27c2ddfe4d\") " pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.375219 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3257afec-79c4-4643-8de1-2a27c2ddfe4d-catalog-content\") pod \"redhat-operators-2jmmz\" (UID: \"3257afec-79c4-4643-8de1-2a27c2ddfe4d\") " pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.398051 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-m6vp6\" (UniqueName: \"kubernetes.io/projected/3257afec-79c4-4643-8de1-2a27c2ddfe4d-kube-api-access-m6vp6\") pod \"redhat-operators-2jmmz\" (UID: \"3257afec-79c4-4643-8de1-2a27c2ddfe4d\") " pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:17 crc kubenswrapper[4644]: I0204 09:57:17.478054 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:18 crc kubenswrapper[4644]: I0204 09:57:18.186001 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2jmmz"] Feb 04 09:57:18 crc kubenswrapper[4644]: I0204 09:57:18.857457 4644 generic.go:334] "Generic (PLEG): container finished" podID="3257afec-79c4-4643-8de1-2a27c2ddfe4d" containerID="4e0ac2aa1f6cf4205314ac1a233853fa57fa14cfa2ff985b4cc6c98caf2c39eb" exitCode=0 Feb 04 09:57:18 crc kubenswrapper[4644]: I0204 09:57:18.857656 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jmmz" event={"ID":"3257afec-79c4-4643-8de1-2a27c2ddfe4d","Type":"ContainerDied","Data":"4e0ac2aa1f6cf4205314ac1a233853fa57fa14cfa2ff985b4cc6c98caf2c39eb"} Feb 04 09:57:18 crc kubenswrapper[4644]: I0204 09:57:18.857844 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jmmz" event={"ID":"3257afec-79c4-4643-8de1-2a27c2ddfe4d","Type":"ContainerStarted","Data":"afb07c1386d2694f4ea29962aa54608f668e9a606a5521b4bb88b25dcaa8af9d"} Feb 04 09:57:19 crc kubenswrapper[4644]: I0204 09:57:19.868763 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jmmz" event={"ID":"3257afec-79c4-4643-8de1-2a27c2ddfe4d","Type":"ContainerStarted","Data":"74b797bc6b065d609c17e45ca654d9be7a3302e69f4cac951c985c13ce9475d8"} Feb 04 09:57:24 crc kubenswrapper[4644]: I0204 09:57:24.942931 4644 generic.go:334] "Generic (PLEG): container finished" podID="3257afec-79c4-4643-8de1-2a27c2ddfe4d" containerID="74b797bc6b065d609c17e45ca654d9be7a3302e69f4cac951c985c13ce9475d8" exitCode=0 Feb 04 09:57:24 crc kubenswrapper[4644]: I0204 09:57:24.943032 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jmmz" event={"ID":"3257afec-79c4-4643-8de1-2a27c2ddfe4d","Type":"ContainerDied","Data":"74b797bc6b065d609c17e45ca654d9be7a3302e69f4cac951c985c13ce9475d8"} Feb 04 09:57:26 crc kubenswrapper[4644]: I0204 09:57:26.961836 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jmmz" event={"ID":"3257afec-79c4-4643-8de1-2a27c2ddfe4d","Type":"ContainerStarted","Data":"4fff5f3a068a9537a308556b251167ce59f9fd5dcd3e29a7729930134d5b972e"} Feb 04 09:57:26 crc kubenswrapper[4644]: I0204 09:57:26.990457 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2jmmz" podStartSLOduration=3.258389889 podStartE2EDuration="9.990437473s" podCreationTimestamp="2026-02-04 09:57:17 +0000 UTC" firstStartedPulling="2026-02-04 09:57:18.860567389 +0000 UTC m=+4548.900625144" lastFinishedPulling="2026-02-04 09:57:25.592614973 +0000 UTC m=+4555.632672728" observedRunningTime="2026-02-04 09:57:26.98295032 +0000 UTC m=+4557.023008075" watchObservedRunningTime="2026-02-04 09:57:26.990437473 +0000 UTC m=+4557.030495228" Feb 04 09:57:27 crc kubenswrapper[4644]: I0204 09:57:27.479406 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2jmmz" Feb 
04 09:57:27 crc kubenswrapper[4644]: I0204 09:57:27.480004 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:28 crc kubenswrapper[4644]: I0204 09:57:28.524152 4644 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2jmmz" podUID="3257afec-79c4-4643-8de1-2a27c2ddfe4d" containerName="registry-server" probeResult="failure" output=< Feb 04 09:57:28 crc kubenswrapper[4644]: timeout: failed to connect service ":50051" within 1s Feb 04 09:57:28 crc kubenswrapper[4644]: > Feb 04 09:57:29 crc kubenswrapper[4644]: I0204 09:57:29.659864 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:57:29 crc kubenswrapper[4644]: E0204 09:57:29.660483 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:57:37 crc kubenswrapper[4644]: I0204 09:57:37.531819 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:37 crc kubenswrapper[4644]: I0204 09:57:37.577778 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:37 crc kubenswrapper[4644]: I0204 09:57:37.774307 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2jmmz"] Feb 04 09:57:39 crc kubenswrapper[4644]: I0204 09:57:39.078123 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2jmmz" podUID="3257afec-79c4-4643-8de1-2a27c2ddfe4d" containerName="registry-server" containerID="cri-o://4fff5f3a068a9537a308556b251167ce59f9fd5dcd3e29a7729930134d5b972e" gracePeriod=2 Feb 04 09:57:40 crc kubenswrapper[4644]: I0204 09:57:40.089036 4644 generic.go:334] "Generic (PLEG): container finished" podID="3257afec-79c4-4643-8de1-2a27c2ddfe4d" containerID="4fff5f3a068a9537a308556b251167ce59f9fd5dcd3e29a7729930134d5b972e" exitCode=0 Feb 04 09:57:40 crc kubenswrapper[4644]: I0204 09:57:40.089179 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jmmz" event={"ID":"3257afec-79c4-4643-8de1-2a27c2ddfe4d","Type":"ContainerDied","Data":"4fff5f3a068a9537a308556b251167ce59f9fd5dcd3e29a7729930134d5b972e"} Feb 04 09:57:40 crc kubenswrapper[4644]: I0204 09:57:40.089316 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jmmz" event={"ID":"3257afec-79c4-4643-8de1-2a27c2ddfe4d","Type":"ContainerDied","Data":"afb07c1386d2694f4ea29962aa54608f668e9a606a5521b4bb88b25dcaa8af9d"} Feb 04 09:57:40 crc kubenswrapper[4644]: I0204 09:57:40.089386 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afb07c1386d2694f4ea29962aa54608f668e9a606a5521b4bb88b25dcaa8af9d" Feb 04 09:57:40 crc kubenswrapper[4644]: I0204 09:57:40.092965 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:40 crc kubenswrapper[4644]: I0204 09:57:40.219047 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6vp6\" (UniqueName: \"kubernetes.io/projected/3257afec-79c4-4643-8de1-2a27c2ddfe4d-kube-api-access-m6vp6\") pod \"3257afec-79c4-4643-8de1-2a27c2ddfe4d\" (UID: \"3257afec-79c4-4643-8de1-2a27c2ddfe4d\") " Feb 04 09:57:40 crc kubenswrapper[4644]: I0204 09:57:40.219494 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3257afec-79c4-4643-8de1-2a27c2ddfe4d-catalog-content\") pod \"3257afec-79c4-4643-8de1-2a27c2ddfe4d\" (UID: \"3257afec-79c4-4643-8de1-2a27c2ddfe4d\") " Feb 04 09:57:40 crc kubenswrapper[4644]: I0204 09:57:40.219548 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3257afec-79c4-4643-8de1-2a27c2ddfe4d-utilities\") pod \"3257afec-79c4-4643-8de1-2a27c2ddfe4d\" (UID: \"3257afec-79c4-4643-8de1-2a27c2ddfe4d\") " Feb 04 09:57:40 crc kubenswrapper[4644]: I0204 09:57:40.220674 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3257afec-79c4-4643-8de1-2a27c2ddfe4d-utilities" (OuterVolumeSpecName: "utilities") pod "3257afec-79c4-4643-8de1-2a27c2ddfe4d" (UID: "3257afec-79c4-4643-8de1-2a27c2ddfe4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:57:40 crc kubenswrapper[4644]: I0204 09:57:40.237486 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3257afec-79c4-4643-8de1-2a27c2ddfe4d-kube-api-access-m6vp6" (OuterVolumeSpecName: "kube-api-access-m6vp6") pod "3257afec-79c4-4643-8de1-2a27c2ddfe4d" (UID: "3257afec-79c4-4643-8de1-2a27c2ddfe4d"). InnerVolumeSpecName "kube-api-access-m6vp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 09:57:40 crc kubenswrapper[4644]: I0204 09:57:40.321697 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3257afec-79c4-4643-8de1-2a27c2ddfe4d-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 09:57:40 crc kubenswrapper[4644]: I0204 09:57:40.321736 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6vp6\" (UniqueName: \"kubernetes.io/projected/3257afec-79c4-4643-8de1-2a27c2ddfe4d-kube-api-access-m6vp6\") on node \"crc\" DevicePath \"\"" Feb 04 09:57:40 crc kubenswrapper[4644]: I0204 09:57:40.357662 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3257afec-79c4-4643-8de1-2a27c2ddfe4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3257afec-79c4-4643-8de1-2a27c2ddfe4d" (UID: "3257afec-79c4-4643-8de1-2a27c2ddfe4d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 09:57:40 crc kubenswrapper[4644]: I0204 09:57:40.423532 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3257afec-79c4-4643-8de1-2a27c2ddfe4d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 09:57:41 crc kubenswrapper[4644]: I0204 09:57:41.096954 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2jmmz" Feb 04 09:57:41 crc kubenswrapper[4644]: I0204 09:57:41.122378 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2jmmz"] Feb 04 09:57:41 crc kubenswrapper[4644]: I0204 09:57:41.131874 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2jmmz"] Feb 04 09:57:42 crc kubenswrapper[4644]: I0204 09:57:42.671259 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3257afec-79c4-4643-8de1-2a27c2ddfe4d" path="/var/lib/kubelet/pods/3257afec-79c4-4643-8de1-2a27c2ddfe4d/volumes" Feb 04 09:57:44 crc kubenswrapper[4644]: I0204 09:57:44.659782 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:57:44 crc kubenswrapper[4644]: E0204 09:57:44.660392 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:57:57 crc kubenswrapper[4644]: I0204 09:57:57.660812 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:57:57 crc kubenswrapper[4644]: E0204 09:57:57.661655 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:58:12 crc kubenswrapper[4644]: I0204 09:58:12.661085 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:58:12 crc kubenswrapper[4644]: E0204 09:58:12.661897 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:58:24 crc kubenswrapper[4644]: I0204 09:58:24.659941 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:58:24 crc kubenswrapper[4644]: E0204 09:58:24.660614 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:58:35 crc kubenswrapper[4644]: I0204 09:58:35.660055 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:58:35 crc 
kubenswrapper[4644]: E0204 09:58:35.660971 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:58:46 crc kubenswrapper[4644]: I0204 09:58:46.661259 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:58:46 crc kubenswrapper[4644]: E0204 09:58:46.662124 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:59:00 crc kubenswrapper[4644]: I0204 09:59:00.667016 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:59:00 crc kubenswrapper[4644]: E0204 09:59:00.668184 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:59:15 crc kubenswrapper[4644]: I0204 09:59:15.659857 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:59:15 crc kubenswrapper[4644]: E0204 09:59:15.660748 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:59:30 crc kubenswrapper[4644]: I0204 09:59:30.681496 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:59:30 crc kubenswrapper[4644]: E0204 09:59:30.682554 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 09:59:43 crc kubenswrapper[4644]: I0204 09:59:43.659526 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 09:59:44 crc kubenswrapper[4644]: I0204 09:59:44.190889 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" 
event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"ac59935f6b13f840dd407828b18da4fe204914aaca0883c725a32183217871ab"} Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.151786 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rk8sf"] Feb 04 09:59:55 crc kubenswrapper[4644]: E0204 09:59:55.152634 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3257afec-79c4-4643-8de1-2a27c2ddfe4d" containerName="extract-content" Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.152647 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="3257afec-79c4-4643-8de1-2a27c2ddfe4d" containerName="extract-content" Feb 04 09:59:55 crc kubenswrapper[4644]: E0204 09:59:55.152666 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3257afec-79c4-4643-8de1-2a27c2ddfe4d" containerName="extract-utilities" Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.152672 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="3257afec-79c4-4643-8de1-2a27c2ddfe4d" containerName="extract-utilities" Feb 04 09:59:55 crc kubenswrapper[4644]: E0204 09:59:55.152703 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3257afec-79c4-4643-8de1-2a27c2ddfe4d" containerName="registry-server" Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.152709 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="3257afec-79c4-4643-8de1-2a27c2ddfe4d" containerName="registry-server" Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.152894 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="3257afec-79c4-4643-8de1-2a27c2ddfe4d" containerName="registry-server" Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.154227 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.165505 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rk8sf"] Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.270470 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fde68d3-b614-4568-8396-5f940473d77f-catalog-content\") pod \"certified-operators-rk8sf\" (UID: \"9fde68d3-b614-4568-8396-5f940473d77f\") " pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.270836 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8tn7\" (UniqueName: \"kubernetes.io/projected/9fde68d3-b614-4568-8396-5f940473d77f-kube-api-access-m8tn7\") pod \"certified-operators-rk8sf\" (UID: \"9fde68d3-b614-4568-8396-5f940473d77f\") " pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.271023 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fde68d3-b614-4568-8396-5f940473d77f-utilities\") pod \"certified-operators-rk8sf\" (UID: \"9fde68d3-b614-4568-8396-5f940473d77f\") " pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.373494 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fde68d3-b614-4568-8396-5f940473d77f-catalog-content\") pod \"certified-operators-rk8sf\" (UID: \"9fde68d3-b614-4568-8396-5f940473d77f\") " pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.373624 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8tn7\" (UniqueName: \"kubernetes.io/projected/9fde68d3-b614-4568-8396-5f940473d77f-kube-api-access-m8tn7\") pod \"certified-operators-rk8sf\" (UID: \"9fde68d3-b614-4568-8396-5f940473d77f\") " pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.373759 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fde68d3-b614-4568-8396-5f940473d77f-utilities\") pod \"certified-operators-rk8sf\" (UID: \"9fde68d3-b614-4568-8396-5f940473d77f\") " pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.374002 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fde68d3-b614-4568-8396-5f940473d77f-catalog-content\") pod \"certified-operators-rk8sf\" (UID: \"9fde68d3-b614-4568-8396-5f940473d77f\") " pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.374224 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fde68d3-b614-4568-8396-5f940473d77f-utilities\") pod \"certified-operators-rk8sf\" (UID: \"9fde68d3-b614-4568-8396-5f940473d77f\") " pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.397730 4644 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m8tn7\" (UniqueName: \"kubernetes.io/projected/9fde68d3-b614-4568-8396-5f940473d77f-kube-api-access-m8tn7\") pod \"certified-operators-rk8sf\" (UID: \"9fde68d3-b614-4568-8396-5f940473d77f\") " pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 09:59:55 crc kubenswrapper[4644]: I0204 09:59:55.474463 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 09:59:56 crc kubenswrapper[4644]: I0204 09:59:55.998736 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rk8sf"] Feb 04 09:59:56 crc kubenswrapper[4644]: I0204 09:59:56.298095 4644 generic.go:334] "Generic (PLEG): container finished" podID="9fde68d3-b614-4568-8396-5f940473d77f" containerID="d31eddf944daaa8c80d5cf135ebef5dcf8fa3e15dc8d34046f94fe493ef28ca1" exitCode=0 Feb 04 09:59:56 crc kubenswrapper[4644]: I0204 09:59:56.298211 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk8sf" event={"ID":"9fde68d3-b614-4568-8396-5f940473d77f","Type":"ContainerDied","Data":"d31eddf944daaa8c80d5cf135ebef5dcf8fa3e15dc8d34046f94fe493ef28ca1"} Feb 04 09:59:56 crc kubenswrapper[4644]: I0204 09:59:56.298430 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk8sf" event={"ID":"9fde68d3-b614-4568-8396-5f940473d77f","Type":"ContainerStarted","Data":"05eedc2819cb04ecfaec1ed9867fabe0be4a8ae717d302c1e447e591a2ad7204"} Feb 04 09:59:56 crc kubenswrapper[4644]: I0204 09:59:56.299903 4644 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 09:59:58 crc kubenswrapper[4644]: I0204 09:59:58.316376 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk8sf" event={"ID":"9fde68d3-b614-4568-8396-5f940473d77f","Type":"ContainerStarted","Data":"1770680cb361982f82740a4ad70a2de726958e08bd5097665c7ccc654309c33e"} Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.203031 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd"] Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.204603 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.207917 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.219624 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.220414 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd"] Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.334705 4644 generic.go:334] "Generic (PLEG): container finished" podID="9fde68d3-b614-4568-8396-5f940473d77f" containerID="1770680cb361982f82740a4ad70a2de726958e08bd5097665c7ccc654309c33e" exitCode=0 Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.334741 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk8sf" event={"ID":"9fde68d3-b614-4568-8396-5f940473d77f","Type":"ContainerDied","Data":"1770680cb361982f82740a4ad70a2de726958e08bd5097665c7ccc654309c33e"} Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.366881 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbxr4\" (UniqueName: \"kubernetes.io/projected/97d79b16-c208-4f84-8f60-e16351c68566-kube-api-access-jbxr4\") pod \"collect-profiles-29503320-rp7jd\" (UID: \"97d79b16-c208-4f84-8f60-e16351c68566\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.366991 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97d79b16-c208-4f84-8f60-e16351c68566-secret-volume\") pod \"collect-profiles-29503320-rp7jd\" (UID: \"97d79b16-c208-4f84-8f60-e16351c68566\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.367063 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97d79b16-c208-4f84-8f60-e16351c68566-config-volume\") pod \"collect-profiles-29503320-rp7jd\" (UID: \"97d79b16-c208-4f84-8f60-e16351c68566\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.468336 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97d79b16-c208-4f84-8f60-e16351c68566-secret-volume\") pod \"collect-profiles-29503320-rp7jd\" (UID: \"97d79b16-c208-4f84-8f60-e16351c68566\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.468433 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97d79b16-c208-4f84-8f60-e16351c68566-config-volume\") pod \"collect-profiles-29503320-rp7jd\" (UID: \"97d79b16-c208-4f84-8f60-e16351c68566\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.468546 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jbxr4\" (UniqueName: \"kubernetes.io/projected/97d79b16-c208-4f84-8f60-e16351c68566-kube-api-access-jbxr4\") pod \"collect-profiles-29503320-rp7jd\" (UID: \"97d79b16-c208-4f84-8f60-e16351c68566\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.469522 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97d79b16-c208-4f84-8f60-e16351c68566-config-volume\") pod \"collect-profiles-29503320-rp7jd\" (UID: \"97d79b16-c208-4f84-8f60-e16351c68566\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.474990 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97d79b16-c208-4f84-8f60-e16351c68566-secret-volume\") pod \"collect-profiles-29503320-rp7jd\" (UID: \"97d79b16-c208-4f84-8f60-e16351c68566\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.487185 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbxr4\" (UniqueName: \"kubernetes.io/projected/97d79b16-c208-4f84-8f60-e16351c68566-kube-api-access-jbxr4\") pod \"collect-profiles-29503320-rp7jd\" (UID: \"97d79b16-c208-4f84-8f60-e16351c68566\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" Feb 04 10:00:00 crc kubenswrapper[4644]: I0204 10:00:00.522202 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" Feb 04 10:00:01 crc kubenswrapper[4644]: I0204 10:00:01.002592 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd"] Feb 04 10:00:01 crc kubenswrapper[4644]: I0204 10:00:01.348804 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk8sf" event={"ID":"9fde68d3-b614-4568-8396-5f940473d77f","Type":"ContainerStarted","Data":"ee74a519aac9e8d95bc1f8768dd457f8c09887c4de804f193028154fef51ab0b"} Feb 04 10:00:01 crc kubenswrapper[4644]: I0204 10:00:01.354460 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" event={"ID":"97d79b16-c208-4f84-8f60-e16351c68566","Type":"ContainerStarted","Data":"b5e38690f6031cba03efabf8fbe2a2e394796e85ea18d3b487896f5db9c45762"} Feb 04 10:00:01 crc kubenswrapper[4644]: I0204 10:00:01.354512 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" event={"ID":"97d79b16-c208-4f84-8f60-e16351c68566","Type":"ContainerStarted","Data":"c84d8ba81938c7f899d54edadf04f98292056e75cebc29f126bfe635eabee512"} Feb 04 10:00:01 crc kubenswrapper[4644]: I0204 10:00:01.377875 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rk8sf" podStartSLOduration=1.7779945110000002 podStartE2EDuration="6.377852376s" podCreationTimestamp="2026-02-04 09:59:55 +0000 UTC" firstStartedPulling="2026-02-04 09:59:56.299665371 +0000 UTC m=+4706.339723126" lastFinishedPulling="2026-02-04 10:00:00.899523236 +0000 UTC m=+4710.939580991" observedRunningTime="2026-02-04 10:00:01.370064825 +0000 UTC 
m=+4711.410122590" watchObservedRunningTime="2026-02-04 10:00:01.377852376 +0000 UTC m=+4711.417910141" Feb 04 10:00:01 crc kubenswrapper[4644]: I0204 10:00:01.400907 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" podStartSLOduration=1.400881372 podStartE2EDuration="1.400881372s" podCreationTimestamp="2026-02-04 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 10:00:01.387696264 +0000 UTC m=+4711.427754019" watchObservedRunningTime="2026-02-04 10:00:01.400881372 +0000 UTC m=+4711.440939127" Feb 04 10:00:03 crc kubenswrapper[4644]: I0204 10:00:03.374225 4644 generic.go:334] "Generic (PLEG): container finished" podID="97d79b16-c208-4f84-8f60-e16351c68566" containerID="b5e38690f6031cba03efabf8fbe2a2e394796e85ea18d3b487896f5db9c45762" exitCode=0 Feb 04 10:00:03 crc kubenswrapper[4644]: I0204 10:00:03.374300 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" event={"ID":"97d79b16-c208-4f84-8f60-e16351c68566","Type":"ContainerDied","Data":"b5e38690f6031cba03efabf8fbe2a2e394796e85ea18d3b487896f5db9c45762"} Feb 04 10:00:04 crc kubenswrapper[4644]: I0204 10:00:04.730060 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" Feb 04 10:00:04 crc kubenswrapper[4644]: I0204 10:00:04.855012 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97d79b16-c208-4f84-8f60-e16351c68566-config-volume\") pod \"97d79b16-c208-4f84-8f60-e16351c68566\" (UID: \"97d79b16-c208-4f84-8f60-e16351c68566\") " Feb 04 10:00:04 crc kubenswrapper[4644]: I0204 10:00:04.855141 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbxr4\" (UniqueName: \"kubernetes.io/projected/97d79b16-c208-4f84-8f60-e16351c68566-kube-api-access-jbxr4\") pod \"97d79b16-c208-4f84-8f60-e16351c68566\" (UID: \"97d79b16-c208-4f84-8f60-e16351c68566\") " Feb 04 10:00:04 crc kubenswrapper[4644]: I0204 10:00:04.855177 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97d79b16-c208-4f84-8f60-e16351c68566-secret-volume\") pod \"97d79b16-c208-4f84-8f60-e16351c68566\" (UID: \"97d79b16-c208-4f84-8f60-e16351c68566\") " Feb 04 10:00:04 crc kubenswrapper[4644]: I0204 10:00:04.856098 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97d79b16-c208-4f84-8f60-e16351c68566-config-volume" (OuterVolumeSpecName: "config-volume") pod "97d79b16-c208-4f84-8f60-e16351c68566" (UID: "97d79b16-c208-4f84-8f60-e16351c68566"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 10:00:04 crc kubenswrapper[4644]: I0204 10:00:04.872217 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d79b16-c208-4f84-8f60-e16351c68566-kube-api-access-jbxr4" (OuterVolumeSpecName: "kube-api-access-jbxr4") pod "97d79b16-c208-4f84-8f60-e16351c68566" (UID: "97d79b16-c208-4f84-8f60-e16351c68566"). InnerVolumeSpecName "kube-api-access-jbxr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 10:00:04 crc kubenswrapper[4644]: I0204 10:00:04.894028 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d79b16-c208-4f84-8f60-e16351c68566-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "97d79b16-c208-4f84-8f60-e16351c68566" (UID: "97d79b16-c208-4f84-8f60-e16351c68566"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 10:00:04 crc kubenswrapper[4644]: I0204 10:00:04.957826 4644 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97d79b16-c208-4f84-8f60-e16351c68566-config-volume\") on node \"crc\" DevicePath \"\"" Feb 04 10:00:04 crc kubenswrapper[4644]: I0204 10:00:04.958151 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbxr4\" (UniqueName: \"kubernetes.io/projected/97d79b16-c208-4f84-8f60-e16351c68566-kube-api-access-jbxr4\") on node \"crc\" DevicePath \"\"" Feb 04 10:00:04 crc kubenswrapper[4644]: I0204 10:00:04.958165 4644 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97d79b16-c208-4f84-8f60-e16351c68566-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 04 10:00:05 crc kubenswrapper[4644]: I0204 10:00:05.394898 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" event={"ID":"97d79b16-c208-4f84-8f60-e16351c68566","Type":"ContainerDied","Data":"c84d8ba81938c7f899d54edadf04f98292056e75cebc29f126bfe635eabee512"} Feb 04 10:00:05 crc kubenswrapper[4644]: I0204 10:00:05.395314 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c84d8ba81938c7f899d54edadf04f98292056e75cebc29f126bfe635eabee512" Feb 04 10:00:05 crc kubenswrapper[4644]: I0204 10:00:05.395520 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503320-rp7jd" Feb 04 10:00:05 crc kubenswrapper[4644]: I0204 10:00:05.475672 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 10:00:05 crc kubenswrapper[4644]: I0204 10:00:05.476061 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 10:00:05 crc kubenswrapper[4644]: I0204 10:00:05.533136 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 10:00:05 crc kubenswrapper[4644]: I0204 10:00:05.830035 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q"] Feb 04 10:00:05 crc kubenswrapper[4644]: I0204 10:00:05.837566 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503275-74j5q"] Feb 04 10:00:06 crc kubenswrapper[4644]: I0204 10:00:06.458458 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 10:00:06 crc kubenswrapper[4644]: I0204 10:00:06.515106 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rk8sf"] Feb 04 10:00:06 crc kubenswrapper[4644]: I0204 10:00:06.671486 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a6e11cc-d0fb-4530-a876-ac4ce91abe97" path="/var/lib/kubelet/pods/7a6e11cc-d0fb-4530-a876-ac4ce91abe97/volumes" Feb 04 10:00:08 crc kubenswrapper[4644]: I0204 10:00:08.421473 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rk8sf" podUID="9fde68d3-b614-4568-8396-5f940473d77f" containerName="registry-server" containerID="cri-o://ee74a519aac9e8d95bc1f8768dd457f8c09887c4de804f193028154fef51ab0b" gracePeriod=2 Feb 04 10:00:08 crc kubenswrapper[4644]: I0204 10:00:08.922717 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.036721 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8tn7\" (UniqueName: \"kubernetes.io/projected/9fde68d3-b614-4568-8396-5f940473d77f-kube-api-access-m8tn7\") pod \"9fde68d3-b614-4568-8396-5f940473d77f\" (UID: \"9fde68d3-b614-4568-8396-5f940473d77f\") " Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.036826 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fde68d3-b614-4568-8396-5f940473d77f-utilities\") pod \"9fde68d3-b614-4568-8396-5f940473d77f\" (UID: \"9fde68d3-b614-4568-8396-5f940473d77f\") " Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.036918 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fde68d3-b614-4568-8396-5f940473d77f-catalog-content\") pod \"9fde68d3-b614-4568-8396-5f940473d77f\" (UID: \"9fde68d3-b614-4568-8396-5f940473d77f\") " Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.037595 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fde68d3-b614-4568-8396-5f940473d77f-utilities" (OuterVolumeSpecName: "utilities") pod "9fde68d3-b614-4568-8396-5f940473d77f" (UID: "9fde68d3-b614-4568-8396-5f940473d77f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.041840 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fde68d3-b614-4568-8396-5f940473d77f-kube-api-access-m8tn7" (OuterVolumeSpecName: "kube-api-access-m8tn7") pod "9fde68d3-b614-4568-8396-5f940473d77f" (UID: "9fde68d3-b614-4568-8396-5f940473d77f"). InnerVolumeSpecName "kube-api-access-m8tn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.088656 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fde68d3-b614-4568-8396-5f940473d77f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fde68d3-b614-4568-8396-5f940473d77f" (UID: "9fde68d3-b614-4568-8396-5f940473d77f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.138813 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fde68d3-b614-4568-8396-5f940473d77f-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.138852 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fde68d3-b614-4568-8396-5f940473d77f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.138863 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8tn7\" (UniqueName: \"kubernetes.io/projected/9fde68d3-b614-4568-8396-5f940473d77f-kube-api-access-m8tn7\") on node \"crc\" DevicePath \"\"" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.430743 4644 generic.go:334] "Generic (PLEG): container finished" podID="9fde68d3-b614-4568-8396-5f940473d77f" containerID="ee74a519aac9e8d95bc1f8768dd457f8c09887c4de804f193028154fef51ab0b" exitCode=0 Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.430805 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk8sf" event={"ID":"9fde68d3-b614-4568-8396-5f940473d77f","Type":"ContainerDied","Data":"ee74a519aac9e8d95bc1f8768dd457f8c09887c4de804f193028154fef51ab0b"} Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.430833 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk8sf" event={"ID":"9fde68d3-b614-4568-8396-5f940473d77f","Type":"ContainerDied","Data":"05eedc2819cb04ecfaec1ed9867fabe0be4a8ae717d302c1e447e591a2ad7204"} Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.430853 4644 scope.go:117] "RemoveContainer" containerID="ee74a519aac9e8d95bc1f8768dd457f8c09887c4de804f193028154fef51ab0b" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.430882 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rk8sf" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.467006 4644 scope.go:117] "RemoveContainer" containerID="1770680cb361982f82740a4ad70a2de726958e08bd5097665c7ccc654309c33e" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.472283 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rk8sf"] Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.499585 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rk8sf"] Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.515224 4644 scope.go:117] "RemoveContainer" containerID="d31eddf944daaa8c80d5cf135ebef5dcf8fa3e15dc8d34046f94fe493ef28ca1" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.539618 4644 scope.go:117] "RemoveContainer" containerID="ee74a519aac9e8d95bc1f8768dd457f8c09887c4de804f193028154fef51ab0b" Feb 04 10:00:09 crc kubenswrapper[4644]: E0204 10:00:09.540115 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee74a519aac9e8d95bc1f8768dd457f8c09887c4de804f193028154fef51ab0b\": container with ID starting with ee74a519aac9e8d95bc1f8768dd457f8c09887c4de804f193028154fef51ab0b not found: ID does not exist" containerID="ee74a519aac9e8d95bc1f8768dd457f8c09887c4de804f193028154fef51ab0b" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.540212 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee74a519aac9e8d95bc1f8768dd457f8c09887c4de804f193028154fef51ab0b"} err="failed to get container status \"ee74a519aac9e8d95bc1f8768dd457f8c09887c4de804f193028154fef51ab0b\": rpc error: code = NotFound desc = could not find container \"ee74a519aac9e8d95bc1f8768dd457f8c09887c4de804f193028154fef51ab0b\": container with ID starting with ee74a519aac9e8d95bc1f8768dd457f8c09887c4de804f193028154fef51ab0b not found: ID does not exist" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.540282 4644 scope.go:117] "RemoveContainer" containerID="1770680cb361982f82740a4ad70a2de726958e08bd5097665c7ccc654309c33e" Feb 04 10:00:09 crc kubenswrapper[4644]: E0204 10:00:09.540530 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1770680cb361982f82740a4ad70a2de726958e08bd5097665c7ccc654309c33e\": container with ID starting with 1770680cb361982f82740a4ad70a2de726958e08bd5097665c7ccc654309c33e not found: ID does not exist" containerID="1770680cb361982f82740a4ad70a2de726958e08bd5097665c7ccc654309c33e" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.540556 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1770680cb361982f82740a4ad70a2de726958e08bd5097665c7ccc654309c33e"} err="failed to get container status \"1770680cb361982f82740a4ad70a2de726958e08bd5097665c7ccc654309c33e\": rpc error: code = NotFound desc = could not find container \"1770680cb361982f82740a4ad70a2de726958e08bd5097665c7ccc654309c33e\": container with ID starting with 1770680cb361982f82740a4ad70a2de726958e08bd5097665c7ccc654309c33e not found: ID does not exist" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.540573 4644 scope.go:117] "RemoveContainer" containerID="d31eddf944daaa8c80d5cf135ebef5dcf8fa3e15dc8d34046f94fe493ef28ca1" Feb 04 10:00:09 crc kubenswrapper[4644]: E0204 10:00:09.540870 4644 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d31eddf944daaa8c80d5cf135ebef5dcf8fa3e15dc8d34046f94fe493ef28ca1\": container with ID starting with d31eddf944daaa8c80d5cf135ebef5dcf8fa3e15dc8d34046f94fe493ef28ca1 not found: ID does not exist" containerID="d31eddf944daaa8c80d5cf135ebef5dcf8fa3e15dc8d34046f94fe493ef28ca1" Feb 04 10:00:09 crc kubenswrapper[4644]: I0204 10:00:09.540949 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d31eddf944daaa8c80d5cf135ebef5dcf8fa3e15dc8d34046f94fe493ef28ca1"} err="failed to get container status \"d31eddf944daaa8c80d5cf135ebef5dcf8fa3e15dc8d34046f94fe493ef28ca1\": rpc error: code = NotFound desc = could not find container \"d31eddf944daaa8c80d5cf135ebef5dcf8fa3e15dc8d34046f94fe493ef28ca1\": container with ID starting with d31eddf944daaa8c80d5cf135ebef5dcf8fa3e15dc8d34046f94fe493ef28ca1 not found: ID does not exist" Feb 04 10:00:10 crc kubenswrapper[4644]: I0204 10:00:10.678619 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fde68d3-b614-4568-8396-5f940473d77f" path="/var/lib/kubelet/pods/9fde68d3-b614-4568-8396-5f940473d77f/volumes" Feb 04 10:00:17 crc kubenswrapper[4644]: I0204 10:00:17.871166 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x8jgb/must-gather-99h5c"] Feb 04 10:00:17 crc kubenswrapper[4644]: E0204 10:00:17.872997 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fde68d3-b614-4568-8396-5f940473d77f" containerName="extract-utilities" Feb 04 10:00:17 crc kubenswrapper[4644]: I0204 10:00:17.882593 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fde68d3-b614-4568-8396-5f940473d77f" containerName="extract-utilities" Feb 04 10:00:17 crc kubenswrapper[4644]: E0204 10:00:17.882720 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fde68d3-b614-4568-8396-5f940473d77f" containerName="registry-server" Feb 04 10:00:17 crc kubenswrapper[4644]: I0204 10:00:17.882790 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fde68d3-b614-4568-8396-5f940473d77f" containerName="registry-server" Feb 04 10:00:17 crc kubenswrapper[4644]: E0204 10:00:17.882855 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d79b16-c208-4f84-8f60-e16351c68566" containerName="collect-profiles" Feb 04 10:00:17 crc kubenswrapper[4644]: I0204 10:00:17.882930 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d79b16-c208-4f84-8f60-e16351c68566" containerName="collect-profiles" Feb 04 10:00:17 crc kubenswrapper[4644]: E0204 10:00:17.883013 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fde68d3-b614-4568-8396-5f940473d77f" containerName="extract-content" Feb 04 10:00:17 crc kubenswrapper[4644]: I0204 10:00:17.883079 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fde68d3-b614-4568-8396-5f940473d77f" containerName="extract-content" Feb 04 10:00:17 crc kubenswrapper[4644]: I0204 10:00:17.883531 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="97d79b16-c208-4f84-8f60-e16351c68566" containerName="collect-profiles" Feb 04 10:00:17 crc kubenswrapper[4644]: I0204 10:00:17.883629 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fde68d3-b614-4568-8396-5f940473d77f" containerName="registry-server" Feb 04 10:00:17 crc kubenswrapper[4644]: I0204 10:00:17.884678 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8jgb/must-gather-99h5c" Feb 04 10:00:17 crc kubenswrapper[4644]: I0204 10:00:17.887242 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x8jgb"/"openshift-service-ca.crt" Feb 04 10:00:17 crc kubenswrapper[4644]: I0204 10:00:17.887737 4644 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x8jgb"/"kube-root-ca.crt" Feb 04 10:00:17 crc kubenswrapper[4644]: I0204 10:00:17.888001 4644 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-x8jgb"/"default-dockercfg-m7dwv" Feb 04 10:00:17 crc kubenswrapper[4644]: I0204 10:00:17.908301 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x8jgb/must-gather-99h5c"] Feb 04 10:00:18 crc kubenswrapper[4644]: I0204 10:00:18.013813 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g56tp\" (UniqueName: \"kubernetes.io/projected/a256db3c-8355-4426-a515-28f0f8d3017c-kube-api-access-g56tp\") pod \"must-gather-99h5c\" (UID: \"a256db3c-8355-4426-a515-28f0f8d3017c\") " pod="openshift-must-gather-x8jgb/must-gather-99h5c" Feb 04 10:00:18 crc kubenswrapper[4644]: I0204 10:00:18.014012 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a256db3c-8355-4426-a515-28f0f8d3017c-must-gather-output\") pod \"must-gather-99h5c\" (UID: \"a256db3c-8355-4426-a515-28f0f8d3017c\") " pod="openshift-must-gather-x8jgb/must-gather-99h5c" Feb 04 10:00:18 crc kubenswrapper[4644]: I0204 10:00:18.116098 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a256db3c-8355-4426-a515-28f0f8d3017c-must-gather-output\") pod \"must-gather-99h5c\" (UID: \"a256db3c-8355-4426-a515-28f0f8d3017c\") " pod="openshift-must-gather-x8jgb/must-gather-99h5c" Feb 04 10:00:18 crc kubenswrapper[4644]: I0204 10:00:18.116249 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g56tp\" (UniqueName: \"kubernetes.io/projected/a256db3c-8355-4426-a515-28f0f8d3017c-kube-api-access-g56tp\") pod \"must-gather-99h5c\" (UID: \"a256db3c-8355-4426-a515-28f0f8d3017c\") " pod="openshift-must-gather-x8jgb/must-gather-99h5c" Feb 04 10:00:18 crc kubenswrapper[4644]: I0204 10:00:18.117102 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a256db3c-8355-4426-a515-28f0f8d3017c-must-gather-output\") pod \"must-gather-99h5c\" (UID: \"a256db3c-8355-4426-a515-28f0f8d3017c\") " pod="openshift-must-gather-x8jgb/must-gather-99h5c" Feb 04 10:00:18 crc kubenswrapper[4644]: I0204 10:00:18.134562 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g56tp\" (UniqueName: \"kubernetes.io/projected/a256db3c-8355-4426-a515-28f0f8d3017c-kube-api-access-g56tp\") pod \"must-gather-99h5c\" (UID: \"a256db3c-8355-4426-a515-28f0f8d3017c\") " pod="openshift-must-gather-x8jgb/must-gather-99h5c" Feb 04 10:00:18 crc kubenswrapper[4644]: I0204 10:00:18.209318 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8jgb/must-gather-99h5c" Feb 04 10:00:18 crc kubenswrapper[4644]: I0204 10:00:18.478204 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x8jgb/must-gather-99h5c"] Feb 04 10:00:19 crc kubenswrapper[4644]: I0204 10:00:19.516158 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8jgb/must-gather-99h5c" event={"ID":"a256db3c-8355-4426-a515-28f0f8d3017c","Type":"ContainerStarted","Data":"464d67585ad70769402984c9e0ce13f1fc38768b1532dd10ac45aebcfaa43290"} Feb 04 10:00:19 crc kubenswrapper[4644]: I0204 10:00:19.516218 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8jgb/must-gather-99h5c" event={"ID":"a256db3c-8355-4426-a515-28f0f8d3017c","Type":"ContainerStarted","Data":"fe9dcedf62a748973827bc6019626c2acd9af8f71e1a561393664300aa95d0a5"} Feb 04 10:00:19 crc kubenswrapper[4644]: I0204 10:00:19.516231 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8jgb/must-gather-99h5c" event={"ID":"a256db3c-8355-4426-a515-28f0f8d3017c","Type":"ContainerStarted","Data":"c9a3742f2716ac20b2c2acec5048e73040235fa8f0a88d100526f70fea753196"} Feb 04 10:00:19 crc kubenswrapper[4644]: I0204 10:00:19.540138 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x8jgb/must-gather-99h5c" podStartSLOduration=2.54011124 podStartE2EDuration="2.54011124s" podCreationTimestamp="2026-02-04 10:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 10:00:19.533901951 +0000 UTC m=+4729.573959706" watchObservedRunningTime="2026-02-04 10:00:19.54011124 +0000 UTC m=+4729.580168995" Feb 04 10:00:21 crc kubenswrapper[4644]: E0204 10:00:21.881000 4644 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.136:57954->38.102.83.136:44683: read tcp 38.102.83.136:57954->38.102.83.136:44683: read: connection reset by peer Feb 04 10:00:23 crc kubenswrapper[4644]: I0204 10:00:23.241141 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x8jgb/crc-debug-qfz49"] Feb 04 10:00:23 crc kubenswrapper[4644]: I0204 10:00:23.243158 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8jgb/crc-debug-qfz49" Feb 04 10:00:23 crc kubenswrapper[4644]: I0204 10:00:23.335915 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t8mf\" (UniqueName: \"kubernetes.io/projected/4eda5d4e-f1c0-4007-9435-85a22220f885-kube-api-access-8t8mf\") pod \"crc-debug-qfz49\" (UID: \"4eda5d4e-f1c0-4007-9435-85a22220f885\") " pod="openshift-must-gather-x8jgb/crc-debug-qfz49" Feb 04 10:00:23 crc kubenswrapper[4644]: I0204 10:00:23.336320 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4eda5d4e-f1c0-4007-9435-85a22220f885-host\") pod \"crc-debug-qfz49\" (UID: \"4eda5d4e-f1c0-4007-9435-85a22220f885\") " pod="openshift-must-gather-x8jgb/crc-debug-qfz49" Feb 04 10:00:23 crc kubenswrapper[4644]: I0204 10:00:23.438359 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t8mf\" (UniqueName: \"kubernetes.io/projected/4eda5d4e-f1c0-4007-9435-85a22220f885-kube-api-access-8t8mf\") pod \"crc-debug-qfz49\" (UID: \"4eda5d4e-f1c0-4007-9435-85a22220f885\") " pod="openshift-must-gather-x8jgb/crc-debug-qfz49" Feb 04 10:00:23 crc kubenswrapper[4644]: I0204 10:00:23.438489 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4eda5d4e-f1c0-4007-9435-85a22220f885-host\") pod \"crc-debug-qfz49\" (UID: \"4eda5d4e-f1c0-4007-9435-85a22220f885\") " pod="openshift-must-gather-x8jgb/crc-debug-qfz49" Feb 04 10:00:23 crc kubenswrapper[4644]: I0204 10:00:23.438653 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4eda5d4e-f1c0-4007-9435-85a22220f885-host\") pod \"crc-debug-qfz49\" (UID: \"4eda5d4e-f1c0-4007-9435-85a22220f885\") " pod="openshift-must-gather-x8jgb/crc-debug-qfz49" Feb 04 10:00:23 crc kubenswrapper[4644]: I0204 10:00:23.467373 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t8mf\" (UniqueName: \"kubernetes.io/projected/4eda5d4e-f1c0-4007-9435-85a22220f885-kube-api-access-8t8mf\") pod \"crc-debug-qfz49\" (UID: \"4eda5d4e-f1c0-4007-9435-85a22220f885\") " pod="openshift-must-gather-x8jgb/crc-debug-qfz49" Feb 04 10:00:23 crc kubenswrapper[4644]: I0204 10:00:23.562762 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8jgb/crc-debug-qfz49" Feb 04 10:00:24 crc kubenswrapper[4644]: I0204 10:00:24.563404 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8jgb/crc-debug-qfz49" event={"ID":"4eda5d4e-f1c0-4007-9435-85a22220f885","Type":"ContainerStarted","Data":"84f34947e1f107521c9fccd94f1dd8cd61e818fd09c4bbb21096e8dee4c22368"} Feb 04 10:00:24 crc kubenswrapper[4644]: I0204 10:00:24.563895 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8jgb/crc-debug-qfz49" event={"ID":"4eda5d4e-f1c0-4007-9435-85a22220f885","Type":"ContainerStarted","Data":"81a11eac56ddee32b36b299b6cf062203153e7837bc11edde6785801dc8f4984"} Feb 04 10:00:24 crc kubenswrapper[4644]: I0204 10:00:24.597804 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x8jgb/crc-debug-qfz49" podStartSLOduration=1.597787377 podStartE2EDuration="1.597787377s" podCreationTimestamp="2026-02-04 10:00:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 10:00:24.581973857 +0000 UTC m=+4734.622031622" watchObservedRunningTime="2026-02-04 10:00:24.597787377 +0000 UTC m=+4734.637845132" Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.155654 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29503321-hcdr9"] Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.157458 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.178288 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29503321-hcdr9"] Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.180899 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-fernet-keys\") pod \"keystone-cron-29503321-hcdr9\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.181021 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-combined-ca-bundle\") pod \"keystone-cron-29503321-hcdr9\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.181074 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fwd9\" (UniqueName: \"kubernetes.io/projected/1f8f01e5-9606-44b0-8bf9-232f683a70ec-kube-api-access-8fwd9\") pod \"keystone-cron-29503321-hcdr9\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.181159 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-config-data\") pod \"keystone-cron-29503321-hcdr9\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.282763 4644 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-combined-ca-bundle\") pod \"keystone-cron-29503321-hcdr9\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.282873 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fwd9\" (UniqueName: \"kubernetes.io/projected/1f8f01e5-9606-44b0-8bf9-232f683a70ec-kube-api-access-8fwd9\") pod \"keystone-cron-29503321-hcdr9\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.282969 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-config-data\") pod \"keystone-cron-29503321-hcdr9\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.283074 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-fernet-keys\") pod \"keystone-cron-29503321-hcdr9\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.293227 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-fernet-keys\") pod \"keystone-cron-29503321-hcdr9\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.293902 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-config-data\") pod \"keystone-cron-29503321-hcdr9\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.294274 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-combined-ca-bundle\") pod \"keystone-cron-29503321-hcdr9\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.303151 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fwd9\" (UniqueName: \"kubernetes.io/projected/1f8f01e5-9606-44b0-8bf9-232f683a70ec-kube-api-access-8fwd9\") pod \"keystone-cron-29503321-hcdr9\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.476108 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:00 crc kubenswrapper[4644]: I0204 10:01:00.998224 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29503321-hcdr9"] Feb 04 10:01:01 crc kubenswrapper[4644]: I0204 10:01:01.919640 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29503321-hcdr9" event={"ID":"1f8f01e5-9606-44b0-8bf9-232f683a70ec","Type":"ContainerStarted","Data":"be0a5869d7740db12ba3535fe46346f2dc380dfdc23e0dcd3ac5d92dc5da2bd6"} Feb 04 10:01:01 crc kubenswrapper[4644]: I0204 10:01:01.920228 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29503321-hcdr9" event={"ID":"1f8f01e5-9606-44b0-8bf9-232f683a70ec","Type":"ContainerStarted","Data":"20e1397575d1ed92b4f49a3d085b808014cc86ab21a5bc313b7311f3af56b35f"} Feb 04 10:01:01 crc kubenswrapper[4644]: I0204 10:01:01.940686 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29503321-hcdr9" podStartSLOduration=1.940664862 podStartE2EDuration="1.940664862s" podCreationTimestamp="2026-02-04 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 10:01:01.935936994 +0000 UTC m=+4771.975994749" watchObservedRunningTime="2026-02-04 10:01:01.940664862 +0000 UTC m=+4771.980722617" Feb 04 10:01:04 crc kubenswrapper[4644]: I0204 10:01:04.814006 4644 scope.go:117] "RemoveContainer" containerID="0ad5135fa6149d5f538f7a5743edd09defd33af817013785f7a9f2c12fe55b7c" Feb 04 10:01:09 crc kubenswrapper[4644]: I0204 10:01:09.985883 4644 generic.go:334] "Generic (PLEG): container finished" podID="1f8f01e5-9606-44b0-8bf9-232f683a70ec" containerID="be0a5869d7740db12ba3535fe46346f2dc380dfdc23e0dcd3ac5d92dc5da2bd6" exitCode=0 Feb 04 10:01:09 crc kubenswrapper[4644]: I0204 10:01:09.985966 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29503321-hcdr9" event={"ID":"1f8f01e5-9606-44b0-8bf9-232f683a70ec","Type":"ContainerDied","Data":"be0a5869d7740db12ba3535fe46346f2dc380dfdc23e0dcd3ac5d92dc5da2bd6"} Feb 04 10:01:11 crc kubenswrapper[4644]: I0204 10:01:11.442225 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:11 crc kubenswrapper[4644]: I0204 10:01:11.504218 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-fernet-keys\") pod \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " Feb 04 10:01:11 crc kubenswrapper[4644]: I0204 10:01:11.504293 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-combined-ca-bundle\") pod \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " Feb 04 10:01:11 crc kubenswrapper[4644]: I0204 10:01:11.504340 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fwd9\" (UniqueName: \"kubernetes.io/projected/1f8f01e5-9606-44b0-8bf9-232f683a70ec-kube-api-access-8fwd9\") pod \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " Feb 04 10:01:11 crc kubenswrapper[4644]: I0204 10:01:11.504397 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-config-data\") pod \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\" (UID: \"1f8f01e5-9606-44b0-8bf9-232f683a70ec\") " Feb 04 10:01:11 crc kubenswrapper[4644]: I0204 10:01:11.519682 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8f01e5-9606-44b0-8bf9-232f683a70ec-kube-api-access-8fwd9" (OuterVolumeSpecName: "kube-api-access-8fwd9") pod "1f8f01e5-9606-44b0-8bf9-232f683a70ec" (UID: "1f8f01e5-9606-44b0-8bf9-232f683a70ec"). InnerVolumeSpecName "kube-api-access-8fwd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 10:01:11 crc kubenswrapper[4644]: I0204 10:01:11.519707 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1f8f01e5-9606-44b0-8bf9-232f683a70ec" (UID: "1f8f01e5-9606-44b0-8bf9-232f683a70ec"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 10:01:11 crc kubenswrapper[4644]: I0204 10:01:11.532191 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f8f01e5-9606-44b0-8bf9-232f683a70ec" (UID: "1f8f01e5-9606-44b0-8bf9-232f683a70ec"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 10:01:11 crc kubenswrapper[4644]: I0204 10:01:11.607433 4644 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 04 10:01:11 crc kubenswrapper[4644]: I0204 10:01:11.607478 4644 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 10:01:11 crc kubenswrapper[4644]: I0204 10:01:11.607501 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fwd9\" (UniqueName: \"kubernetes.io/projected/1f8f01e5-9606-44b0-8bf9-232f683a70ec-kube-api-access-8fwd9\") on node \"crc\" DevicePath \"\"" Feb 04 10:01:11 crc kubenswrapper[4644]: I0204 10:01:11.623425 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-config-data" (OuterVolumeSpecName: "config-data") pod "1f8f01e5-9606-44b0-8bf9-232f683a70ec" (UID: "1f8f01e5-9606-44b0-8bf9-232f683a70ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 10:01:11 crc kubenswrapper[4644]: I0204 10:01:11.709816 4644 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8f01e5-9606-44b0-8bf9-232f683a70ec-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 10:01:12 crc kubenswrapper[4644]: I0204 10:01:12.063530 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29503321-hcdr9" event={"ID":"1f8f01e5-9606-44b0-8bf9-232f683a70ec","Type":"ContainerDied","Data":"20e1397575d1ed92b4f49a3d085b808014cc86ab21a5bc313b7311f3af56b35f"} Feb 04 10:01:12 crc kubenswrapper[4644]: I0204 10:01:12.063569 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20e1397575d1ed92b4f49a3d085b808014cc86ab21a5bc313b7311f3af56b35f" Feb 04 10:01:12 crc kubenswrapper[4644]: I0204 10:01:12.063541 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29503321-hcdr9" Feb 04 10:01:12 crc kubenswrapper[4644]: I0204 10:01:12.077741 4644 generic.go:334] "Generic (PLEG): container finished" podID="4eda5d4e-f1c0-4007-9435-85a22220f885" containerID="84f34947e1f107521c9fccd94f1dd8cd61e818fd09c4bbb21096e8dee4c22368" exitCode=0 Feb 04 10:01:12 crc kubenswrapper[4644]: I0204 10:01:12.077778 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8jgb/crc-debug-qfz49" event={"ID":"4eda5d4e-f1c0-4007-9435-85a22220f885","Type":"ContainerDied","Data":"84f34947e1f107521c9fccd94f1dd8cd61e818fd09c4bbb21096e8dee4c22368"} Feb 04 10:01:13 crc kubenswrapper[4644]: I0204 10:01:13.188400 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8jgb/crc-debug-qfz49" Feb 04 10:01:13 crc kubenswrapper[4644]: I0204 10:01:13.237492 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x8jgb/crc-debug-qfz49"] Feb 04 10:01:13 crc kubenswrapper[4644]: I0204 10:01:13.248444 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x8jgb/crc-debug-qfz49"] Feb 04 10:01:13 crc kubenswrapper[4644]: I0204 10:01:13.248762 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4eda5d4e-f1c0-4007-9435-85a22220f885-host\") pod \"4eda5d4e-f1c0-4007-9435-85a22220f885\" (UID: \"4eda5d4e-f1c0-4007-9435-85a22220f885\") " Feb 04 10:01:13 crc kubenswrapper[4644]: I0204 10:01:13.248831 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5d4e-f1c0-4007-9435-85a22220f885-host" (OuterVolumeSpecName: "host") pod "4eda5d4e-f1c0-4007-9435-85a22220f885" (UID: "4eda5d4e-f1c0-4007-9435-85a22220f885"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 10:01:13 crc kubenswrapper[4644]: I0204 10:01:13.248949 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t8mf\" (UniqueName: \"kubernetes.io/projected/4eda5d4e-f1c0-4007-9435-85a22220f885-kube-api-access-8t8mf\") pod \"4eda5d4e-f1c0-4007-9435-85a22220f885\" (UID: \"4eda5d4e-f1c0-4007-9435-85a22220f885\") " Feb 04 10:01:13 crc kubenswrapper[4644]: I0204 10:01:13.249518 4644 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4eda5d4e-f1c0-4007-9435-85a22220f885-host\") on node \"crc\" DevicePath \"\"" Feb 04 10:01:13 crc kubenswrapper[4644]: I0204 10:01:13.253748 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eda5d4e-f1c0-4007-9435-85a22220f885-kube-api-access-8t8mf" (OuterVolumeSpecName: "kube-api-access-8t8mf") pod "4eda5d4e-f1c0-4007-9435-85a22220f885" (UID: "4eda5d4e-f1c0-4007-9435-85a22220f885"). InnerVolumeSpecName "kube-api-access-8t8mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 10:01:13 crc kubenswrapper[4644]: I0204 10:01:13.351526 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t8mf\" (UniqueName: \"kubernetes.io/projected/4eda5d4e-f1c0-4007-9435-85a22220f885-kube-api-access-8t8mf\") on node \"crc\" DevicePath \"\"" Feb 04 10:01:14 crc kubenswrapper[4644]: I0204 10:01:14.102267 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81a11eac56ddee32b36b299b6cf062203153e7837bc11edde6785801dc8f4984" Feb 04 10:01:14 crc kubenswrapper[4644]: I0204 10:01:14.102738 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8jgb/crc-debug-qfz49" Feb 04 10:01:14 crc kubenswrapper[4644]: I0204 10:01:14.672116 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eda5d4e-f1c0-4007-9435-85a22220f885" path="/var/lib/kubelet/pods/4eda5d4e-f1c0-4007-9435-85a22220f885/volumes" Feb 04 10:01:14 crc kubenswrapper[4644]: I0204 10:01:14.709557 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x8jgb/crc-debug-sqmhq"] Feb 04 10:01:14 crc kubenswrapper[4644]: E0204 10:01:14.710220 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8f01e5-9606-44b0-8bf9-232f683a70ec" containerName="keystone-cron" Feb 04 10:01:14 crc kubenswrapper[4644]: I0204 10:01:14.710253 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8f01e5-9606-44b0-8bf9-232f683a70ec" containerName="keystone-cron" Feb 04 10:01:14 crc kubenswrapper[4644]: E0204 10:01:14.710281 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5d4e-f1c0-4007-9435-85a22220f885" containerName="container-00" Feb 04 10:01:14 crc kubenswrapper[4644]: I0204 10:01:14.710291 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5d4e-f1c0-4007-9435-85a22220f885" containerName="container-00" Feb 04 10:01:14 crc kubenswrapper[4644]: I0204 10:01:14.710603 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8f01e5-9606-44b0-8bf9-232f683a70ec" containerName="keystone-cron" Feb 04 10:01:14 crc kubenswrapper[4644]: I0204 10:01:14.710634 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5d4e-f1c0-4007-9435-85a22220f885" containerName="container-00" Feb 04 10:01:14 crc kubenswrapper[4644]: I0204 10:01:14.712421 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8jgb/crc-debug-sqmhq" Feb 04 10:01:14 crc kubenswrapper[4644]: I0204 10:01:14.780893 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a615ced-3c4a-499c-b3be-28afda010d27-host\") pod \"crc-debug-sqmhq\" (UID: \"7a615ced-3c4a-499c-b3be-28afda010d27\") " pod="openshift-must-gather-x8jgb/crc-debug-sqmhq" Feb 04 10:01:14 crc kubenswrapper[4644]: I0204 10:01:14.781513 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtdbt\" (UniqueName: \"kubernetes.io/projected/7a615ced-3c4a-499c-b3be-28afda010d27-kube-api-access-qtdbt\") pod \"crc-debug-sqmhq\" (UID: \"7a615ced-3c4a-499c-b3be-28afda010d27\") " pod="openshift-must-gather-x8jgb/crc-debug-sqmhq" Feb 04 10:01:14 crc kubenswrapper[4644]: I0204 10:01:14.883169 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtdbt\" (UniqueName: \"kubernetes.io/projected/7a615ced-3c4a-499c-b3be-28afda010d27-kube-api-access-qtdbt\") pod \"crc-debug-sqmhq\" (UID: \"7a615ced-3c4a-499c-b3be-28afda010d27\") " pod="openshift-must-gather-x8jgb/crc-debug-sqmhq" Feb 04 10:01:14 crc kubenswrapper[4644]: I0204 10:01:14.883630 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a615ced-3c4a-499c-b3be-28afda010d27-host\") pod \"crc-debug-sqmhq\" (UID: \"7a615ced-3c4a-499c-b3be-28afda010d27\") " pod="openshift-must-gather-x8jgb/crc-debug-sqmhq" Feb 04 10:01:14 crc kubenswrapper[4644]: I0204 10:01:14.883865 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a615ced-3c4a-499c-b3be-28afda010d27-host\") pod \"crc-debug-sqmhq\" (UID: \"7a615ced-3c4a-499c-b3be-28afda010d27\") " pod="openshift-must-gather-x8jgb/crc-debug-sqmhq" Feb 04 10:01:14 crc kubenswrapper[4644]: I0204 10:01:14.925232 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtdbt\" (UniqueName: \"kubernetes.io/projected/7a615ced-3c4a-499c-b3be-28afda010d27-kube-api-access-qtdbt\") pod \"crc-debug-sqmhq\" (UID: \"7a615ced-3c4a-499c-b3be-28afda010d27\") " pod="openshift-must-gather-x8jgb/crc-debug-sqmhq" Feb 04 10:01:15 crc kubenswrapper[4644]: I0204 10:01:15.032348 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8jgb/crc-debug-sqmhq" Feb 04 10:01:15 crc kubenswrapper[4644]: I0204 10:01:15.121843 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8jgb/crc-debug-sqmhq" event={"ID":"7a615ced-3c4a-499c-b3be-28afda010d27","Type":"ContainerStarted","Data":"993cae954bfb67ffe628aa2a059c94ef762690216529be3c91cddf527c4d4ae4"} Feb 04 10:01:16 crc kubenswrapper[4644]: I0204 10:01:16.130443 4644 generic.go:334] "Generic (PLEG): container finished" podID="7a615ced-3c4a-499c-b3be-28afda010d27" containerID="805b85768182bcd9f3932a3e7fe3a0b6150abde91738f0e30a284621c8d52851" exitCode=0 Feb 04 10:01:16 crc kubenswrapper[4644]: I0204 10:01:16.130517 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8jgb/crc-debug-sqmhq" event={"ID":"7a615ced-3c4a-499c-b3be-28afda010d27","Type":"ContainerDied","Data":"805b85768182bcd9f3932a3e7fe3a0b6150abde91738f0e30a284621c8d52851"} Feb 04 10:01:17 crc kubenswrapper[4644]: I0204 10:01:17.282503 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x8jgb/crc-debug-sqmhq" Feb 04 10:01:17 crc kubenswrapper[4644]: I0204 10:01:17.328973 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a615ced-3c4a-499c-b3be-28afda010d27-host\") pod \"7a615ced-3c4a-499c-b3be-28afda010d27\" (UID: \"7a615ced-3c4a-499c-b3be-28afda010d27\") " Feb 04 10:01:17 crc kubenswrapper[4644]: I0204 10:01:17.329106 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a615ced-3c4a-499c-b3be-28afda010d27-host" (OuterVolumeSpecName: "host") pod "7a615ced-3c4a-499c-b3be-28afda010d27" (UID: "7a615ced-3c4a-499c-b3be-28afda010d27"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 10:01:17 crc kubenswrapper[4644]: I0204 10:01:17.329246 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtdbt\" (UniqueName: \"kubernetes.io/projected/7a615ced-3c4a-499c-b3be-28afda010d27-kube-api-access-qtdbt\") pod \"7a615ced-3c4a-499c-b3be-28afda010d27\" (UID: \"7a615ced-3c4a-499c-b3be-28afda010d27\") " Feb 04 10:01:17 crc kubenswrapper[4644]: I0204 10:01:17.329812 4644 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a615ced-3c4a-499c-b3be-28afda010d27-host\") on node \"crc\" DevicePath \"\"" Feb 04 10:01:17 crc kubenswrapper[4644]: I0204 10:01:17.356037 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a615ced-3c4a-499c-b3be-28afda010d27-kube-api-access-qtdbt" (OuterVolumeSpecName: "kube-api-access-qtdbt") pod "7a615ced-3c4a-499c-b3be-28afda010d27" (UID: "7a615ced-3c4a-499c-b3be-28afda010d27"). InnerVolumeSpecName "kube-api-access-qtdbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 10:01:17 crc kubenswrapper[4644]: I0204 10:01:17.432129 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtdbt\" (UniqueName: \"kubernetes.io/projected/7a615ced-3c4a-499c-b3be-28afda010d27-kube-api-access-qtdbt\") on node \"crc\" DevicePath \"\"" Feb 04 10:01:18 crc kubenswrapper[4644]: I0204 10:01:18.155796 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8jgb/crc-debug-sqmhq" event={"ID":"7a615ced-3c4a-499c-b3be-28afda010d27","Type":"ContainerDied","Data":"993cae954bfb67ffe628aa2a059c94ef762690216529be3c91cddf527c4d4ae4"} Feb 04 10:01:18 crc kubenswrapper[4644]: I0204 10:01:18.155840 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="993cae954bfb67ffe628aa2a059c94ef762690216529be3c91cddf527c4d4ae4" Feb 04 10:01:18 crc kubenswrapper[4644]: I0204 10:01:18.155903 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x8jgb/crc-debug-sqmhq" Feb 04 10:01:18 crc kubenswrapper[4644]: I0204 10:01:18.382791 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x8jgb/crc-debug-sqmhq"] Feb 04 10:01:18 crc kubenswrapper[4644]: I0204 10:01:18.390558 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x8jgb/crc-debug-sqmhq"] Feb 04 10:01:18 crc kubenswrapper[4644]: I0204 10:01:18.669806 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a615ced-3c4a-499c-b3be-28afda010d27" path="/var/lib/kubelet/pods/7a615ced-3c4a-499c-b3be-28afda010d27/volumes" Feb 04 10:01:19 crc kubenswrapper[4644]: I0204 10:01:19.639685 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x8jgb/crc-debug-7gl7n"] Feb 04 10:01:19 crc kubenswrapper[4644]: E0204 10:01:19.640155 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a615ced-3c4a-499c-b3be-28afda010d27" containerName="container-00" Feb 04 10:01:19 crc kubenswrapper[4644]: I0204 10:01:19.640171 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a615ced-3c4a-499c-b3be-28afda010d27" containerName="container-00" Feb 04 10:01:19 crc kubenswrapper[4644]: I0204 10:01:19.640409 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a615ced-3c4a-499c-b3be-28afda010d27" containerName="container-00" Feb 04 10:01:19 crc kubenswrapper[4644]: I0204 10:01:19.641202 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8jgb/crc-debug-7gl7n" Feb 04 10:01:19 crc kubenswrapper[4644]: I0204 10:01:19.674564 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82pl9\" (UniqueName: \"kubernetes.io/projected/379ae498-b0e3-428c-9b5b-df882df0feb8-kube-api-access-82pl9\") pod \"crc-debug-7gl7n\" (UID: \"379ae498-b0e3-428c-9b5b-df882df0feb8\") " pod="openshift-must-gather-x8jgb/crc-debug-7gl7n" Feb 04 10:01:19 crc kubenswrapper[4644]: I0204 10:01:19.674982 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/379ae498-b0e3-428c-9b5b-df882df0feb8-host\") pod \"crc-debug-7gl7n\" (UID: \"379ae498-b0e3-428c-9b5b-df882df0feb8\") " pod="openshift-must-gather-x8jgb/crc-debug-7gl7n" Feb 04 10:01:19 crc kubenswrapper[4644]: I0204 10:01:19.776414 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82pl9\" (UniqueName: \"kubernetes.io/projected/379ae498-b0e3-428c-9b5b-df882df0feb8-kube-api-access-82pl9\") pod \"crc-debug-7gl7n\" (UID: \"379ae498-b0e3-428c-9b5b-df882df0feb8\") " pod="openshift-must-gather-x8jgb/crc-debug-7gl7n" Feb 04 10:01:19 crc kubenswrapper[4644]: I0204 10:01:19.776535 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/379ae498-b0e3-428c-9b5b-df882df0feb8-host\") pod \"crc-debug-7gl7n\" (UID: \"379ae498-b0e3-428c-9b5b-df882df0feb8\") " pod="openshift-must-gather-x8jgb/crc-debug-7gl7n" Feb 04 10:01:19 crc kubenswrapper[4644]: I0204 10:01:19.776961 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/379ae498-b0e3-428c-9b5b-df882df0feb8-host\") pod \"crc-debug-7gl7n\" (UID: \"379ae498-b0e3-428c-9b5b-df882df0feb8\") " pod="openshift-must-gather-x8jgb/crc-debug-7gl7n" Feb 04 10:01:19 crc kubenswrapper[4644]: I0204 10:01:19.805978 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82pl9\" (UniqueName: \"kubernetes.io/projected/379ae498-b0e3-428c-9b5b-df882df0feb8-kube-api-access-82pl9\") pod \"crc-debug-7gl7n\" (UID: \"379ae498-b0e3-428c-9b5b-df882df0feb8\") " pod="openshift-must-gather-x8jgb/crc-debug-7gl7n" Feb 04 10:01:19 crc kubenswrapper[4644]: I0204 10:01:19.960653 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8jgb/crc-debug-7gl7n" Feb 04 10:01:20 crc kubenswrapper[4644]: W0204 10:01:20.007081 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod379ae498_b0e3_428c_9b5b_df882df0feb8.slice/crio-0245469bdaf6be3435dfe89fa35f28f3a041eb18c26272037dd1b3fb6afe7b5c WatchSource:0}: Error finding container 0245469bdaf6be3435dfe89fa35f28f3a041eb18c26272037dd1b3fb6afe7b5c: Status 404 returned error can't find the container with id 0245469bdaf6be3435dfe89fa35f28f3a041eb18c26272037dd1b3fb6afe7b5c Feb 04 10:01:20 crc kubenswrapper[4644]: I0204 10:01:20.172956 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8jgb/crc-debug-7gl7n" event={"ID":"379ae498-b0e3-428c-9b5b-df882df0feb8","Type":"ContainerStarted","Data":"0245469bdaf6be3435dfe89fa35f28f3a041eb18c26272037dd1b3fb6afe7b5c"} Feb 04 10:01:21 crc kubenswrapper[4644]: I0204 10:01:21.183398 4644 generic.go:334] "Generic (PLEG): container finished" podID="379ae498-b0e3-428c-9b5b-df882df0feb8" containerID="6b756b983279934615d887d4f6d287b3eae63d49c98ebf1766927d8c8b6e6a1d" exitCode=0 Feb 04 10:01:21 crc kubenswrapper[4644]: I0204 10:01:21.183557 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8jgb/crc-debug-7gl7n" event={"ID":"379ae498-b0e3-428c-9b5b-df882df0feb8","Type":"ContainerDied","Data":"6b756b983279934615d887d4f6d287b3eae63d49c98ebf1766927d8c8b6e6a1d"} Feb 04 10:01:21 crc kubenswrapper[4644]: I0204 10:01:21.219187 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x8jgb/crc-debug-7gl7n"] Feb 04 10:01:21 crc kubenswrapper[4644]: I0204 10:01:21.228736 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x8jgb/crc-debug-7gl7n"] Feb 04 10:01:22 crc kubenswrapper[4644]: I0204 10:01:22.287907 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x8jgb/crc-debug-7gl7n" Feb 04 10:01:22 crc kubenswrapper[4644]: I0204 10:01:22.323723 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82pl9\" (UniqueName: \"kubernetes.io/projected/379ae498-b0e3-428c-9b5b-df882df0feb8-kube-api-access-82pl9\") pod \"379ae498-b0e3-428c-9b5b-df882df0feb8\" (UID: \"379ae498-b0e3-428c-9b5b-df882df0feb8\") " Feb 04 10:01:22 crc kubenswrapper[4644]: I0204 10:01:22.324100 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/379ae498-b0e3-428c-9b5b-df882df0feb8-host\") pod \"379ae498-b0e3-428c-9b5b-df882df0feb8\" (UID: \"379ae498-b0e3-428c-9b5b-df882df0feb8\") " Feb 04 10:01:22 crc kubenswrapper[4644]: I0204 10:01:22.324724 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/379ae498-b0e3-428c-9b5b-df882df0feb8-host" (OuterVolumeSpecName: "host") pod "379ae498-b0e3-428c-9b5b-df882df0feb8" (UID: "379ae498-b0e3-428c-9b5b-df882df0feb8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 10:01:22 crc kubenswrapper[4644]: I0204 10:01:22.329810 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/379ae498-b0e3-428c-9b5b-df882df0feb8-kube-api-access-82pl9" (OuterVolumeSpecName: "kube-api-access-82pl9") pod "379ae498-b0e3-428c-9b5b-df882df0feb8" (UID: "379ae498-b0e3-428c-9b5b-df882df0feb8"). 
InnerVolumeSpecName "kube-api-access-82pl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 10:01:22 crc kubenswrapper[4644]: I0204 10:01:22.427147 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82pl9\" (UniqueName: \"kubernetes.io/projected/379ae498-b0e3-428c-9b5b-df882df0feb8-kube-api-access-82pl9\") on node \"crc\" DevicePath \"\"" Feb 04 10:01:22 crc kubenswrapper[4644]: I0204 10:01:22.427194 4644 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/379ae498-b0e3-428c-9b5b-df882df0feb8-host\") on node \"crc\" DevicePath \"\"" Feb 04 10:01:22 crc kubenswrapper[4644]: I0204 10:01:22.672615 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="379ae498-b0e3-428c-9b5b-df882df0feb8" path="/var/lib/kubelet/pods/379ae498-b0e3-428c-9b5b-df882df0feb8/volumes" Feb 04 10:01:23 crc kubenswrapper[4644]: I0204 10:01:23.202189 4644 scope.go:117] "RemoveContainer" containerID="6b756b983279934615d887d4f6d287b3eae63d49c98ebf1766927d8c8b6e6a1d" Feb 04 10:01:23 crc kubenswrapper[4644]: I0204 10:01:23.202362 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x8jgb/crc-debug-7gl7n" Feb 04 10:02:03 crc kubenswrapper[4644]: I0204 10:02:03.068230 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-558bf4756b-n2g7b_9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7/barbican-api/0.log" Feb 04 10:02:03 crc kubenswrapper[4644]: I0204 10:02:03.217162 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-558bf4756b-n2g7b_9ad3de3f-bb6c-4b89-bfc2-a32ce5e793b7/barbican-api-log/0.log" Feb 04 10:02:03 crc kubenswrapper[4644]: I0204 10:02:03.291584 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-866db59d-m5kdr_f0a0f2d9-bd63-4dc5-826c-5d67f92a31da/barbican-keystone-listener/0.log" Feb 04 10:02:03 crc kubenswrapper[4644]: I0204 10:02:03.402860 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-866db59d-m5kdr_f0a0f2d9-bd63-4dc5-826c-5d67f92a31da/barbican-keystone-listener-log/0.log" Feb 04 10:02:03 crc kubenswrapper[4644]: I0204 10:02:03.584766 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-747c75f8c-ljgzl_be2eab6d-9a04-400b-baa9-c20fe5fcd269/barbican-worker/0.log" Feb 04 10:02:03 crc kubenswrapper[4644]: I0204 10:02:03.588244 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-747c75f8c-ljgzl_be2eab6d-9a04-400b-baa9-c20fe5fcd269/barbican-worker-log/0.log" Feb 04 10:02:03 crc kubenswrapper[4644]: I0204 10:02:03.787590 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-5nlk4_308d165a-5458-4e82-936c-b7a25ebfcbe6/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:04 crc kubenswrapper[4644]: I0204 10:02:04.002719 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6e9334f7-37d6-49f6-9c7f-e5b301283f15/ceilometer-central-agent/0.log" Feb 04 10:02:04 crc kubenswrapper[4644]: I0204 10:02:04.053659 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6e9334f7-37d6-49f6-9c7f-e5b301283f15/ceilometer-notification-agent/0.log" Feb 04 10:02:04 crc kubenswrapper[4644]: I0204 10:02:04.096863 4644 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_6e9334f7-37d6-49f6-9c7f-e5b301283f15/proxy-httpd/0.log" Feb 04 10:02:04 crc kubenswrapper[4644]: I0204 10:02:04.128030 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6e9334f7-37d6-49f6-9c7f-e5b301283f15/sg-core/0.log" Feb 04 10:02:04 crc kubenswrapper[4644]: I0204 10:02:04.317628 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4109caeb-65a7-4c6b-b09c-83da593a1ef2/cinder-api-log/0.log" Feb 04 10:02:04 crc kubenswrapper[4644]: I0204 10:02:04.410674 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4109caeb-65a7-4c6b-b09c-83da593a1ef2/cinder-api/0.log" Feb 04 10:02:04 crc kubenswrapper[4644]: I0204 10:02:04.617700 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e01886c2-fe24-4f65-9ace-d48998f27c65/probe/0.log" Feb 04 10:02:04 crc kubenswrapper[4644]: I0204 10:02:04.649267 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e01886c2-fe24-4f65-9ace-d48998f27c65/cinder-scheduler/0.log" Feb 04 10:02:04 crc kubenswrapper[4644]: I0204 10:02:04.742929 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jzbgw_54674fd4-5080-4cea-8cf9-7c6bbd9c53de/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:04 crc kubenswrapper[4644]: I0204 10:02:04.860532 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-cwt6z_10140326-561b-48b8-8746-576a83f36c12/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:04 crc kubenswrapper[4644]: I0204 10:02:04.986648 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-fk6fk_a581143f-dc8c-4226-a36c-5ece09be2e6f/init/0.log" Feb 04 10:02:05 crc kubenswrapper[4644]: I0204 10:02:05.202223 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-fk6fk_a581143f-dc8c-4226-a36c-5ece09be2e6f/init/0.log" Feb 04 10:02:05 crc kubenswrapper[4644]: I0204 10:02:05.405263 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-n4g7w_01ef2c4c-6c2c-4a49-8fd7-5b7d34555d58/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:05 crc kubenswrapper[4644]: I0204 10:02:05.511579 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-fk6fk_a581143f-dc8c-4226-a36c-5ece09be2e6f/dnsmasq-dns/0.log" Feb 04 10:02:05 crc kubenswrapper[4644]: I0204 10:02:05.555562 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 10:02:05 crc kubenswrapper[4644]: I0204 10:02:05.555798 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 10:02:05 crc kubenswrapper[4644]: I0204 10:02:05.645961 4644 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_c737ef12-0ce6-47d8-9773-0244eff8200b/glance-log/0.log" Feb 04 10:02:05 crc kubenswrapper[4644]: I0204 10:02:05.715878 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c737ef12-0ce6-47d8-9773-0244eff8200b/glance-httpd/0.log" Feb 04 10:02:05 crc kubenswrapper[4644]: I0204 10:02:05.768154 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1fda6114-8d44-49ba-b30e-8ce9233f4b33/glance-httpd/0.log" Feb 04 10:02:05 crc kubenswrapper[4644]: I0204 10:02:05.837690 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1fda6114-8d44-49ba-b30e-8ce9233f4b33/glance-log/0.log" Feb 04 10:02:06 crc kubenswrapper[4644]: I0204 10:02:06.105657 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-658bfcb544-88gj4_676db25f-e0ad-48cc-af2c-88029d6eb80d/horizon/1.log" Feb 04 10:02:06 crc kubenswrapper[4644]: I0204 10:02:06.206710 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-658bfcb544-88gj4_676db25f-e0ad-48cc-af2c-88029d6eb80d/horizon/0.log" Feb 04 10:02:06 crc kubenswrapper[4644]: I0204 10:02:06.473507 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-s75nr_11062f5d-3dd1-4087-9ea2-1b32fee5526c/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:06 crc kubenswrapper[4644]: I0204 10:02:06.574920 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-658bfcb544-88gj4_676db25f-e0ad-48cc-af2c-88029d6eb80d/horizon-log/0.log" Feb 04 10:02:06 crc kubenswrapper[4644]: I0204 10:02:06.630146 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-bzzbj_110ef1d0-ffbc-4356-9c1f-169889312eef/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:07 crc kubenswrapper[4644]: I0204 10:02:07.201912 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-766cbd9f4b-bj8dc_1fa2a049-f943-48c9-b4c2-09c2cd5decc2/keystone-api/0.log" Feb 04 10:02:07 crc kubenswrapper[4644]: I0204 10:02:07.273127 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29503261-bsnnv_cc72f9f7-839f-402b-9576-e9daf7ed4d5b/keystone-cron/0.log" Feb 04 10:02:07 crc kubenswrapper[4644]: I0204 10:02:07.435287 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29503321-hcdr9_1f8f01e5-9606-44b0-8bf9-232f683a70ec/keystone-cron/0.log" Feb 04 10:02:07 crc kubenswrapper[4644]: I0204 10:02:07.527850 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_26059c78-ccf4-418d-9012-40eb6cc5ba6f/kube-state-metrics/0.log" Feb 04 10:02:07 crc kubenswrapper[4644]: I0204 10:02:07.684773 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-b44ph_5a074a3e-62ea-4cb2-96f3-ccce51518ad3/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:08 crc kubenswrapper[4644]: I0204 10:02:08.486682 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-6q6mv_da0998d9-9cc2-4e46-ac4f-f47ec801a998/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:08 crc kubenswrapper[4644]: I0204 10:02:08.553048 4644 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_neutron-5cdfd666b9-jkzcm_e05bc597-36c9-492b-abb4-45edb814eed5/neutron-httpd/0.log" Feb 04 10:02:08 crc kubenswrapper[4644]: I0204 10:02:08.902579 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5cdfd666b9-jkzcm_e05bc597-36c9-492b-abb4-45edb814eed5/neutron-api/0.log" Feb 04 10:02:09 crc kubenswrapper[4644]: I0204 10:02:09.587194 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_75651f7d-0816-4090-bcd8-0c20fd5660bd/nova-cell0-conductor-conductor/0.log" Feb 04 10:02:10 crc kubenswrapper[4644]: I0204 10:02:10.077643 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_96b76067-7c3f-44cb-8d2a-0bbb04035d9c/nova-api-log/0.log" Feb 04 10:02:10 crc kubenswrapper[4644]: I0204 10:02:10.228853 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_36639dbd-0602-44cf-a535-51d69170e6c5/nova-cell1-conductor-conductor/0.log" Feb 04 10:02:10 crc kubenswrapper[4644]: I0204 10:02:10.523750 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_3ed25922-57d7-4a67-828a-6a07c733ba91/nova-cell1-novncproxy-novncproxy/0.log" Feb 04 10:02:10 crc kubenswrapper[4644]: I0204 10:02:10.617217 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-wbclq_7446c79e-b931-43ae-85a0-f21ab513e5e7/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:10 crc kubenswrapper[4644]: I0204 10:02:10.655934 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_96b76067-7c3f-44cb-8d2a-0bbb04035d9c/nova-api-api/0.log" Feb 04 10:02:10 crc kubenswrapper[4644]: I0204 10:02:10.911412 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_bc0f95ed-7197-4f32-8d5c-7d9551d0f846/nova-metadata-log/0.log" Feb 04 10:02:11 crc kubenswrapper[4644]: I0204 10:02:11.252587 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4536ebcc-8962-4cf4-9cae-5db170118156/mysql-bootstrap/0.log" Feb 04 10:02:11 crc kubenswrapper[4644]: I0204 10:02:11.522492 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_9decc8da-612f-4d8e-9ec7-b3894e3456f5/nova-scheduler-scheduler/0.log" Feb 04 10:02:11 crc kubenswrapper[4644]: I0204 10:02:11.565036 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4536ebcc-8962-4cf4-9cae-5db170118156/mysql-bootstrap/0.log" Feb 04 10:02:11 crc kubenswrapper[4644]: I0204 10:02:11.580617 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4536ebcc-8962-4cf4-9cae-5db170118156/galera/0.log" Feb 04 10:02:11 crc kubenswrapper[4644]: I0204 10:02:11.814709 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9bf50d46-1c85-4db8-9887-f30f832212c1/mysql-bootstrap/0.log" Feb 04 10:02:12 crc kubenswrapper[4644]: I0204 10:02:12.046437 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9bf50d46-1c85-4db8-9887-f30f832212c1/mysql-bootstrap/0.log" Feb 04 10:02:12 crc kubenswrapper[4644]: I0204 10:02:12.085153 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9bf50d46-1c85-4db8-9887-f30f832212c1/galera/0.log" Feb 04 10:02:12 crc kubenswrapper[4644]: I0204 10:02:12.366854 4644 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_openstackclient_d8596785-f659-4038-ac9a-a48c9a4dbd44/openstackclient/0.log" Feb 04 10:02:12 crc kubenswrapper[4644]: I0204 10:02:12.519852 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-8nfv7_964cdd6e-b29a-401d-9bb0-3375b663a899/ovn-controller/0.log" Feb 04 10:02:12 crc kubenswrapper[4644]: I0204 10:02:12.604768 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mb84g_b4cecbc7-4505-46d1-8ddb-4b454e614fb1/openstack-network-exporter/0.log" Feb 04 10:02:12 crc kubenswrapper[4644]: I0204 10:02:12.873191 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_bc0f95ed-7197-4f32-8d5c-7d9551d0f846/nova-metadata-metadata/0.log" Feb 04 10:02:12 crc kubenswrapper[4644]: I0204 10:02:12.925968 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zzhbv_a5cee1f7-2917-47fe-95ac-96b0d9c502b7/ovsdb-server-init/0.log" Feb 04 10:02:13 crc kubenswrapper[4644]: I0204 10:02:13.184366 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zzhbv_a5cee1f7-2917-47fe-95ac-96b0d9c502b7/ovsdb-server-init/0.log" Feb 04 10:02:13 crc kubenswrapper[4644]: I0204 10:02:13.227123 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zzhbv_a5cee1f7-2917-47fe-95ac-96b0d9c502b7/ovs-vswitchd/0.log" Feb 04 10:02:13 crc kubenswrapper[4644]: I0204 10:02:13.264921 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zzhbv_a5cee1f7-2917-47fe-95ac-96b0d9c502b7/ovsdb-server/0.log" Feb 04 10:02:13 crc kubenswrapper[4644]: I0204 10:02:13.491873 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-4lkt5_409ea25f-f243-4e2e-811a-2e887aad6ab8/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:13 crc kubenswrapper[4644]: I0204 10:02:13.602045 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_92ea26d9-2316-4fe5-b998-ed9fa22e6a2a/ovn-northd/0.log" Feb 04 10:02:13 crc kubenswrapper[4644]: I0204 10:02:13.614169 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_92ea26d9-2316-4fe5-b998-ed9fa22e6a2a/openstack-network-exporter/0.log" Feb 04 10:02:13 crc kubenswrapper[4644]: I0204 10:02:13.883037 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f/ovsdbserver-nb/0.log" Feb 04 10:02:13 crc kubenswrapper[4644]: I0204 10:02:13.899124 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8f3a6583-1f9f-45ab-a36b-aa5e9ac77c1f/openstack-network-exporter/0.log" Feb 04 10:02:14 crc kubenswrapper[4644]: I0204 10:02:14.187000 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_73360e1e-70eb-499b-b3a1-cd9bde6ac466/ovsdbserver-sb/0.log" Feb 04 10:02:14 crc kubenswrapper[4644]: I0204 10:02:14.223477 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_73360e1e-70eb-499b-b3a1-cd9bde6ac466/openstack-network-exporter/0.log" Feb 04 10:02:14 crc kubenswrapper[4644]: I0204 10:02:14.535239 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ccc5a46e-238d-43d7-9d48-311b21c76326/setup-container/0.log" Feb 04 10:02:14 crc kubenswrapper[4644]: I0204 10:02:14.575021 4644 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_placement-6fc7776988-rx9dz_fe4a7be8-11a8-4974-80dc-0893a6f9c104/placement-api/0.log" Feb 04 10:02:14 crc kubenswrapper[4644]: I0204 10:02:14.763753 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6fc7776988-rx9dz_fe4a7be8-11a8-4974-80dc-0893a6f9c104/placement-log/0.log" Feb 04 10:02:14 crc kubenswrapper[4644]: I0204 10:02:14.777770 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ccc5a46e-238d-43d7-9d48-311b21c76326/setup-container/0.log" Feb 04 10:02:14 crc kubenswrapper[4644]: I0204 10:02:14.799734 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ccc5a46e-238d-43d7-9d48-311b21c76326/rabbitmq/0.log" Feb 04 10:02:14 crc kubenswrapper[4644]: I0204 10:02:14.971549 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ca7a0ec9-ff74-4989-b66e-29bfc47bc73d/setup-container/0.log" Feb 04 10:02:15 crc kubenswrapper[4644]: I0204 10:02:15.342722 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-wqzt6_c4d8e999-5063-4f94-a049-6566ecee94fb/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:15 crc kubenswrapper[4644]: I0204 10:02:15.345832 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ca7a0ec9-ff74-4989-b66e-29bfc47bc73d/setup-container/0.log" Feb 04 10:02:15 crc kubenswrapper[4644]: I0204 10:02:15.374599 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ca7a0ec9-ff74-4989-b66e-29bfc47bc73d/rabbitmq/0.log" Feb 04 10:02:15 crc kubenswrapper[4644]: I0204 10:02:15.660607 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-rjb5s_a2f175cb-68ae-4aa4-ad16-193a42aa579d/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:15 crc kubenswrapper[4644]: I0204 10:02:15.703662 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-9xxjw_156d5fb6-7e66-4c46-b846-26d3344b8f05/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:15 crc kubenswrapper[4644]: I0204 10:02:15.927153 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5gqkp_bf71221b-6b1b-4245-b080-346ef3c46902/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:16 crc kubenswrapper[4644]: I0204 10:02:16.089439 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9vf66_201d72e8-4479-464a-949d-53be692f0f9e/ssh-known-hosts-edpm-deployment/0.log" Feb 04 10:02:16 crc kubenswrapper[4644]: I0204 10:02:16.433407 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-69d495f767-hzkrb_6a04a95b-5411-483c-a0de-408fa44500e0/proxy-server/0.log" Feb 04 10:02:16 crc kubenswrapper[4644]: I0204 10:02:16.528494 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-69d495f767-hzkrb_6a04a95b-5411-483c-a0de-408fa44500e0/proxy-httpd/0.log" Feb 04 10:02:16 crc kubenswrapper[4644]: I0204 10:02:16.554557 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-sltjx_5a843b53-7ea4-48d9-9c8a-16be734d66c6/swift-ring-rebalance/0.log" Feb 04 10:02:16 crc kubenswrapper[4644]: I0204 10:02:16.832608 4644 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/account-reaper/0.log" Feb 04 10:02:16 crc kubenswrapper[4644]: I0204 10:02:16.884639 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/account-auditor/0.log" Feb 04 10:02:16 crc kubenswrapper[4644]: I0204 10:02:16.899916 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/account-replicator/0.log" Feb 04 10:02:17 crc kubenswrapper[4644]: I0204 10:02:17.765933 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/account-server/0.log" Feb 04 10:02:17 crc kubenswrapper[4644]: I0204 10:02:17.780868 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/container-replicator/0.log" Feb 04 10:02:17 crc kubenswrapper[4644]: I0204 10:02:17.794183 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/container-server/0.log" Feb 04 10:02:17 crc kubenswrapper[4644]: I0204 10:02:17.804512 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/container-auditor/0.log" Feb 04 10:02:17 crc kubenswrapper[4644]: I0204 10:02:17.977708 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/container-updater/0.log" Feb 04 10:02:18 crc kubenswrapper[4644]: I0204 10:02:18.064593 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/object-expirer/0.log" Feb 04 10:02:18 crc kubenswrapper[4644]: I0204 10:02:18.118361 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/object-auditor/0.log" Feb 04 10:02:18 crc kubenswrapper[4644]: I0204 10:02:18.120284 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/object-replicator/0.log" Feb 04 10:02:18 crc kubenswrapper[4644]: I0204 10:02:18.324655 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/object-updater/0.log" Feb 04 10:02:18 crc kubenswrapper[4644]: I0204 10:02:18.363066 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/object-server/0.log" Feb 04 10:02:18 crc kubenswrapper[4644]: I0204 10:02:18.383206 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/swift-recon-cron/0.log" Feb 04 10:02:18 crc kubenswrapper[4644]: I0204 10:02:18.401276 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1344aa43-93ef-4780-a56d-3eb89d55b1a2/rsync/0.log" Feb 04 10:02:19 crc kubenswrapper[4644]: I0204 10:02:19.272035 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_60a4ca21-d05e-44f2-903b-f8c6d2eb4f9a/tempest-tests-tempest-tests-runner/0.log" Feb 04 10:02:19 crc kubenswrapper[4644]: I0204 10:02:19.298785 4644 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-gv78k_feb1a5d9-f2df-4534-8a80-73d11c854b35/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:19 crc kubenswrapper[4644]: I0204 10:02:19.403183 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_2b70d9a5-0c99-4bca-b9e9-8212e140403a/test-operator-logs-container/0.log" Feb 04 10:02:19 crc kubenswrapper[4644]: I0204 10:02:19.589741 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-xcsl7_42aaff39-4ff2-44b5-9770-56fc11241b30/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.537208 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qhm65"] Feb 04 10:02:35 crc kubenswrapper[4644]: E0204 10:02:35.538680 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="379ae498-b0e3-428c-9b5b-df882df0feb8" containerName="container-00" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.538771 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="379ae498-b0e3-428c-9b5b-df882df0feb8" containerName="container-00" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.539265 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="379ae498-b0e3-428c-9b5b-df882df0feb8" containerName="container-00" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.543783 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.559259 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.559318 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.561191 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhm65"] Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.597761 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_c568fc8f-9f0b-496b-b39e-51ef99241e6e/memcached/0.log" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.622393 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vsg6\" (UniqueName: \"kubernetes.io/projected/af5539e5-757c-42e7-873d-028ade6b2185-kube-api-access-4vsg6\") pod \"redhat-marketplace-qhm65\" (UID: \"af5539e5-757c-42e7-873d-028ade6b2185\") " pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.622437 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5539e5-757c-42e7-873d-028ade6b2185-catalog-content\") pod \"redhat-marketplace-qhm65\" (UID: 
\"af5539e5-757c-42e7-873d-028ade6b2185\") " pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.622528 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5539e5-757c-42e7-873d-028ade6b2185-utilities\") pod \"redhat-marketplace-qhm65\" (UID: \"af5539e5-757c-42e7-873d-028ade6b2185\") " pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.723768 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vsg6\" (UniqueName: \"kubernetes.io/projected/af5539e5-757c-42e7-873d-028ade6b2185-kube-api-access-4vsg6\") pod \"redhat-marketplace-qhm65\" (UID: \"af5539e5-757c-42e7-873d-028ade6b2185\") " pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.723810 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5539e5-757c-42e7-873d-028ade6b2185-catalog-content\") pod \"redhat-marketplace-qhm65\" (UID: \"af5539e5-757c-42e7-873d-028ade6b2185\") " pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.723974 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5539e5-757c-42e7-873d-028ade6b2185-utilities\") pod \"redhat-marketplace-qhm65\" (UID: \"af5539e5-757c-42e7-873d-028ade6b2185\") " pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.725130 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5539e5-757c-42e7-873d-028ade6b2185-utilities\") pod \"redhat-marketplace-qhm65\" (UID: \"af5539e5-757c-42e7-873d-028ade6b2185\") " pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.725421 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5539e5-757c-42e7-873d-028ade6b2185-catalog-content\") pod \"redhat-marketplace-qhm65\" (UID: \"af5539e5-757c-42e7-873d-028ade6b2185\") " pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.752664 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vsg6\" (UniqueName: \"kubernetes.io/projected/af5539e5-757c-42e7-873d-028ade6b2185-kube-api-access-4vsg6\") pod \"redhat-marketplace-qhm65\" (UID: \"af5539e5-757c-42e7-873d-028ade6b2185\") " pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:35 crc kubenswrapper[4644]: I0204 10:02:35.882675 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:36 crc kubenswrapper[4644]: I0204 10:02:36.500999 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhm65"] Feb 04 10:02:37 crc kubenswrapper[4644]: I0204 10:02:37.877914 4644 generic.go:334] "Generic (PLEG): container finished" podID="af5539e5-757c-42e7-873d-028ade6b2185" containerID="5c0164a6b2873554b080ed7b763e3d470078270fa1a94ff8fe109586d66ee527" exitCode=0 Feb 04 10:02:37 crc kubenswrapper[4644]: I0204 10:02:37.877967 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhm65" event={"ID":"af5539e5-757c-42e7-873d-028ade6b2185","Type":"ContainerDied","Data":"5c0164a6b2873554b080ed7b763e3d470078270fa1a94ff8fe109586d66ee527"} Feb 04 10:02:37 crc kubenswrapper[4644]: I0204 10:02:37.878242 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhm65" event={"ID":"af5539e5-757c-42e7-873d-028ade6b2185","Type":"ContainerStarted","Data":"ba2a2fcf12938532180da366a808278f957c4e50331794c611cfde8224f57d72"} Feb 04 10:02:39 crc kubenswrapper[4644]: I0204 10:02:39.930615 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhm65" event={"ID":"af5539e5-757c-42e7-873d-028ade6b2185","Type":"ContainerStarted","Data":"b8398cefc6ffaa7207a130e9989dbc2f9ffcaf437bbee5f78bee58e7363b848f"} Feb 04 10:02:41 crc kubenswrapper[4644]: I0204 10:02:41.950061 4644 generic.go:334] "Generic (PLEG): container finished" podID="af5539e5-757c-42e7-873d-028ade6b2185" containerID="b8398cefc6ffaa7207a130e9989dbc2f9ffcaf437bbee5f78bee58e7363b848f" exitCode=0 Feb 04 10:02:41 crc kubenswrapper[4644]: I0204 10:02:41.950136 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhm65" event={"ID":"af5539e5-757c-42e7-873d-028ade6b2185","Type":"ContainerDied","Data":"b8398cefc6ffaa7207a130e9989dbc2f9ffcaf437bbee5f78bee58e7363b848f"} Feb 04 10:02:42 crc kubenswrapper[4644]: I0204 10:02:42.961268 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhm65" event={"ID":"af5539e5-757c-42e7-873d-028ade6b2185","Type":"ContainerStarted","Data":"a5ba839b133b894bdd17bbab538d44139c9edf0bece38493b4ad07c36ba029cb"} Feb 04 10:02:42 crc kubenswrapper[4644]: I0204 10:02:42.986482 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qhm65" podStartSLOduration=3.2621736 podStartE2EDuration="7.986459016s" podCreationTimestamp="2026-02-04 10:02:35 +0000 UTC" firstStartedPulling="2026-02-04 10:02:37.879427227 +0000 UTC m=+4867.919484982" lastFinishedPulling="2026-02-04 10:02:42.603712643 +0000 UTC m=+4872.643770398" observedRunningTime="2026-02-04 10:02:42.978066308 +0000 UTC m=+4873.018124083" watchObservedRunningTime="2026-02-04 10:02:42.986459016 +0000 UTC m=+4873.026516771" Feb 04 10:02:45 crc kubenswrapper[4644]: I0204 10:02:45.883496 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:45 crc kubenswrapper[4644]: I0204 10:02:45.883795 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:46 crc kubenswrapper[4644]: I0204 10:02:46.032115 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:55 crc kubenswrapper[4644]: I0204 10:02:55.938383 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:55 crc kubenswrapper[4644]: I0204 10:02:55.999362 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhm65"] Feb 04 10:02:56 crc kubenswrapper[4644]: I0204 10:02:56.071669 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qhm65" podUID="af5539e5-757c-42e7-873d-028ade6b2185" containerName="registry-server" containerID="cri-o://a5ba839b133b894bdd17bbab538d44139c9edf0bece38493b4ad07c36ba029cb" gracePeriod=2 Feb 04 10:02:57 crc kubenswrapper[4644]: I0204 10:02:57.085690 4644 generic.go:334] "Generic (PLEG): container finished" podID="af5539e5-757c-42e7-873d-028ade6b2185" containerID="a5ba839b133b894bdd17bbab538d44139c9edf0bece38493b4ad07c36ba029cb" exitCode=0 Feb 04 10:02:57 crc kubenswrapper[4644]: I0204 10:02:57.085790 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhm65" event={"ID":"af5539e5-757c-42e7-873d-028ade6b2185","Type":"ContainerDied","Data":"a5ba839b133b894bdd17bbab538d44139c9edf0bece38493b4ad07c36ba029cb"} Feb 04 10:02:57 crc kubenswrapper[4644]: I0204 10:02:57.765315 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:57 crc kubenswrapper[4644]: I0204 10:02:57.786133 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5539e5-757c-42e7-873d-028ade6b2185-utilities\") pod \"af5539e5-757c-42e7-873d-028ade6b2185\" (UID: \"af5539e5-757c-42e7-873d-028ade6b2185\") " Feb 04 10:02:57 crc kubenswrapper[4644]: I0204 10:02:57.786318 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5539e5-757c-42e7-873d-028ade6b2185-catalog-content\") pod \"af5539e5-757c-42e7-873d-028ade6b2185\" (UID: \"af5539e5-757c-42e7-873d-028ade6b2185\") " Feb 04 10:02:57 crc kubenswrapper[4644]: I0204 10:02:57.786376 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vsg6\" (UniqueName: \"kubernetes.io/projected/af5539e5-757c-42e7-873d-028ade6b2185-kube-api-access-4vsg6\") pod \"af5539e5-757c-42e7-873d-028ade6b2185\" (UID: \"af5539e5-757c-42e7-873d-028ade6b2185\") " Feb 04 10:02:57 crc kubenswrapper[4644]: I0204 10:02:57.833378 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5539e5-757c-42e7-873d-028ade6b2185-kube-api-access-4vsg6" (OuterVolumeSpecName: "kube-api-access-4vsg6") pod "af5539e5-757c-42e7-873d-028ade6b2185" (UID: "af5539e5-757c-42e7-873d-028ade6b2185"). InnerVolumeSpecName "kube-api-access-4vsg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 10:02:57 crc kubenswrapper[4644]: I0204 10:02:57.834416 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af5539e5-757c-42e7-873d-028ade6b2185-utilities" (OuterVolumeSpecName: "utilities") pod "af5539e5-757c-42e7-873d-028ade6b2185" (UID: "af5539e5-757c-42e7-873d-028ade6b2185"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 10:02:57 crc kubenswrapper[4644]: I0204 10:02:57.837570 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af5539e5-757c-42e7-873d-028ade6b2185-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af5539e5-757c-42e7-873d-028ade6b2185" (UID: "af5539e5-757c-42e7-873d-028ade6b2185"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 10:02:57 crc kubenswrapper[4644]: I0204 10:02:57.889474 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af5539e5-757c-42e7-873d-028ade6b2185-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 10:02:57 crc kubenswrapper[4644]: I0204 10:02:57.889595 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vsg6\" (UniqueName: \"kubernetes.io/projected/af5539e5-757c-42e7-873d-028ade6b2185-kube-api-access-4vsg6\") on node \"crc\" DevicePath \"\"" Feb 04 10:02:57 crc kubenswrapper[4644]: I0204 10:02:57.889691 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af5539e5-757c-42e7-873d-028ade6b2185-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 10:02:57 crc kubenswrapper[4644]: I0204 10:02:57.992561 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-fhr46_86635827-026c-4145-9130-3c300da69963/manager/0.log" Feb 04 10:02:58 crc kubenswrapper[4644]: I0204 10:02:58.097602 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhm65" event={"ID":"af5539e5-757c-42e7-873d-028ade6b2185","Type":"ContainerDied","Data":"ba2a2fcf12938532180da366a808278f957c4e50331794c611cfde8224f57d72"} Feb 04 10:02:58 crc kubenswrapper[4644]: I0204 10:02:58.097685 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhm65" Feb 04 10:02:58 crc kubenswrapper[4644]: I0204 10:02:58.098676 4644 scope.go:117] "RemoveContainer" containerID="a5ba839b133b894bdd17bbab538d44139c9edf0bece38493b4ad07c36ba029cb" Feb 04 10:02:58 crc kubenswrapper[4644]: I0204 10:02:58.117911 4644 scope.go:117] "RemoveContainer" containerID="b8398cefc6ffaa7207a130e9989dbc2f9ffcaf437bbee5f78bee58e7363b848f" Feb 04 10:02:58 crc kubenswrapper[4644]: I0204 10:02:58.136996 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhm65"] Feb 04 10:02:58 crc kubenswrapper[4644]: I0204 10:02:58.173379 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhm65"] Feb 04 10:02:58 crc kubenswrapper[4644]: I0204 10:02:58.176758 4644 scope.go:117] "RemoveContainer" containerID="5c0164a6b2873554b080ed7b763e3d470078270fa1a94ff8fe109586d66ee527" Feb 04 10:02:58 crc kubenswrapper[4644]: I0204 10:02:58.291017 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-sxbgc_65e46d7b-9b3f-447b-91da-35322d406623/manager/0.log" Feb 04 10:02:58 crc kubenswrapper[4644]: I0204 10:02:58.325335 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-hwkc4_3bb04651-3f3e-4f0a-8822-11279a338e20/manager/0.log" Feb 04 10:02:58 crc kubenswrapper[4644]: I0204 10:02:58.593260 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp_92a3c41f-925d-4ff3-a3ff-77f9c35216fb/util/0.log" Feb 04 10:02:58 crc kubenswrapper[4644]: I0204 10:02:58.672881 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af5539e5-757c-42e7-873d-028ade6b2185" path="/var/lib/kubelet/pods/af5539e5-757c-42e7-873d-028ade6b2185/volumes" Feb 04 10:02:59 crc kubenswrapper[4644]: I0204 10:02:59.300013 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp_92a3c41f-925d-4ff3-a3ff-77f9c35216fb/util/0.log" Feb 04 10:02:59 crc kubenswrapper[4644]: I0204 10:02:59.354530 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp_92a3c41f-925d-4ff3-a3ff-77f9c35216fb/pull/0.log" Feb 04 10:02:59 crc kubenswrapper[4644]: I0204 10:02:59.385113 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp_92a3c41f-925d-4ff3-a3ff-77f9c35216fb/pull/0.log" Feb 04 10:02:59 crc kubenswrapper[4644]: I0204 10:02:59.607026 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp_92a3c41f-925d-4ff3-a3ff-77f9c35216fb/util/0.log" Feb 04 10:02:59 crc kubenswrapper[4644]: I0204 10:02:59.619141 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp_92a3c41f-925d-4ff3-a3ff-77f9c35216fb/extract/0.log" Feb 04 10:02:59 crc kubenswrapper[4644]: I0204 10:02:59.636715 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f1546a2d92dbc7c86b33d2712454456d72af9d9ad8913f001e107032bbdm6xp_92a3c41f-925d-4ff3-a3ff-77f9c35216fb/pull/0.log" Feb 04 10:02:59 
crc kubenswrapper[4644]: I0204 10:02:59.901426 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-stnhl_362644b0-399b-4476-b8f7-9723011b9053/manager/0.log" Feb 04 10:02:59 crc kubenswrapper[4644]: I0204 10:02:59.960854 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-cln6d_b449c147-de4b-4503-b680-86e2a43715e2/manager/0.log" Feb 04 10:03:00 crc kubenswrapper[4644]: I0204 10:03:00.165630 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-g9w8f_718025b3-0dfa-4c50-a020-8fc030f6061c/manager/0.log" Feb 04 10:03:00 crc kubenswrapper[4644]: I0204 10:03:00.488074 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-6ldzh_af50abdc-12fd-4e29-b6ce-804f91e185f5/manager/0.log" Feb 04 10:03:00 crc kubenswrapper[4644]: I0204 10:03:00.525533 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-pb5zg_b3816529-aae3-447c-b497-027d78669856/manager/0.log" Feb 04 10:03:00 crc kubenswrapper[4644]: I0204 10:03:00.752539 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-xmsgv_e9033b55-edfc-440d-bd2c-fa027d27f034/manager/0.log" Feb 04 10:03:00 crc kubenswrapper[4644]: I0204 10:03:00.860004 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-t5sv7_f1aab4ac-082c-4c69-94c8-6291514178b7/manager/0.log" Feb 04 10:03:01 crc kubenswrapper[4644]: I0204 10:03:01.067974 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-xw5rw_08ce9496-06f2-4a40-aac7-eaddbc4eb617/manager/0.log" Feb 04 10:03:01 crc kubenswrapper[4644]: I0204 10:03:01.108297 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-6mv9v_1126de8e-d0ae-4d0d-a7d3-cad73f6cc672/manager/0.log" Feb 04 10:03:01 crc kubenswrapper[4644]: I0204 10:03:01.440526 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-v6q27_6f482e24-1f12-48bd-8944-93b1e7ee2d76/manager/0.log" Feb 04 10:03:01 crc kubenswrapper[4644]: I0204 10:03:01.520501 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-9n6pj_0d5154cd-bccf-4112-a9b5-df0cf8375905/manager/0.log" Feb 04 10:03:01 crc kubenswrapper[4644]: I0204 10:03:01.728870 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4d777fx_d92e25ae-9963-4073-9b4e-66f4aafff7a6/manager/0.log" Feb 04 10:03:02 crc kubenswrapper[4644]: I0204 10:03:02.023676 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7779fb4444-rsl7v_9e804a34-fb91-4608-84f0-08283597694b/operator/0.log" Feb 04 10:03:02 crc kubenswrapper[4644]: I0204 10:03:02.289135 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-85gvc_fad001d0-1475-450d-97d9-714d13e42d37/registry-server/0.log" Feb 04 10:03:02 crc kubenswrapper[4644]: I0204 
10:03:02.771538 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-4r2z6_e6482c44-8c91-4931-aceb-b18c7418a6c4/manager/0.log" Feb 04 10:03:02 crc kubenswrapper[4644]: I0204 10:03:02.831500 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-hp2fd_dca5895b-8bfa-4060-a60d-79e37d0eefe6/manager/0.log" Feb 04 10:03:03 crc kubenswrapper[4644]: I0204 10:03:03.062721 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vgccb_9e6331c7-8b94-4ded-92d0-e9db7bbd45ec/operator/0.log" Feb 04 10:03:03 crc kubenswrapper[4644]: I0204 10:03:03.200983 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-7jlm9_b74f9275-a7ff-4b5f-a6e1-3adff65c8a71/manager/0.log" Feb 04 10:03:03 crc kubenswrapper[4644]: I0204 10:03:03.626868 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-9msfm_8b00283c-6f66-489b-b929-bbd1a5706b67/manager/0.log" Feb 04 10:03:03 crc kubenswrapper[4644]: I0204 10:03:03.899126 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-tc45m_bb1b03f9-1c9b-4cfb-9503-4c28a27f2d96/manager/0.log" Feb 04 10:03:04 crc kubenswrapper[4644]: I0204 10:03:04.095809 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-8l8s8_277bd37d-6c35-4b57-b7bd-b6bb3f1043fe/manager/0.log" Feb 04 10:03:04 crc kubenswrapper[4644]: I0204 10:03:04.404128 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-69b675f8c4-g2gnp_ddb47eef-c05a-40c3-8d94-dd9187b61267/manager/0.log" Feb 04 10:03:05 crc kubenswrapper[4644]: I0204 10:03:05.555081 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 10:03:05 crc kubenswrapper[4644]: I0204 10:03:05.555355 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 10:03:05 crc kubenswrapper[4644]: I0204 10:03:05.555397 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 10:03:05 crc kubenswrapper[4644]: I0204 10:03:05.556105 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac59935f6b13f840dd407828b18da4fe204914aaca0883c725a32183217871ab"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 10:03:05 crc kubenswrapper[4644]: I0204 10:03:05.556154 4644 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" containerID="cri-o://ac59935f6b13f840dd407828b18da4fe204914aaca0883c725a32183217871ab" gracePeriod=600 Feb 04 10:03:06 crc kubenswrapper[4644]: I0204 10:03:06.177299 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="ac59935f6b13f840dd407828b18da4fe204914aaca0883c725a32183217871ab" exitCode=0 Feb 04 10:03:06 crc kubenswrapper[4644]: I0204 10:03:06.177379 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"ac59935f6b13f840dd407828b18da4fe204914aaca0883c725a32183217871ab"} Feb 04 10:03:06 crc kubenswrapper[4644]: I0204 10:03:06.177979 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262"} Feb 04 10:03:06 crc kubenswrapper[4644]: I0204 10:03:06.178033 4644 scope.go:117] "RemoveContainer" containerID="7d0c6ce82ce11d2d07c2eab3dccdc984e419c6967d201661676f1ffd42439e39" Feb 04 10:03:26 crc kubenswrapper[4644]: I0204 10:03:26.610501 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-htm2g_f846627e-2b5c-4fed-8898-e734c9dbce9b/control-plane-machine-set-operator/0.log" Feb 04 10:03:26 crc kubenswrapper[4644]: I0204 10:03:26.823270 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vw5x9_c09e24ca-d42d-4f59-9a19-83410a062bb1/machine-api-operator/0.log" Feb 04 10:03:26 crc kubenswrapper[4644]: I0204 10:03:26.856953 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vw5x9_c09e24ca-d42d-4f59-9a19-83410a062bb1/kube-rbac-proxy/0.log" Feb 04 10:03:40 crc kubenswrapper[4644]: I0204 10:03:40.375801 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-mpbm5_ea2632db-c8cd-42a9-8f74-d989cf9f77a2/cert-manager-controller/0.log" Feb 04 10:03:40 crc kubenswrapper[4644]: I0204 10:03:40.561908 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-nk2qt_ce66b184-f3af-4f9c-b86d-138993d4114b/cert-manager-webhook/0.log" Feb 04 10:03:40 crc kubenswrapper[4644]: I0204 10:03:40.563024 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-27drg_cac9d42c-34be-410d-aca7-2346943b13c6/cert-manager-cainjector/0.log" Feb 04 10:03:53 crc kubenswrapper[4644]: I0204 10:03:53.928075 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6f874f9768-mhn4n_19ba6f84-da44-468a-bf88-2d5861308d59/nmstate-console-plugin/0.log" Feb 04 10:03:54 crc kubenswrapper[4644]: I0204 10:03:54.179758 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q22qq_736f2cd3-420f-4c26-91ad-acd900c9fa01/nmstate-handler/0.log" Feb 04 10:03:54 crc kubenswrapper[4644]: I0204 10:03:54.180364 4644 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-677949fd65-q44mg_bf7f3412-56f2-4b59-bd63-86f748e1d27f/kube-rbac-proxy/0.log" Feb 04 10:03:54 crc kubenswrapper[4644]: I0204 10:03:54.255577 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-677949fd65-q44mg_bf7f3412-56f2-4b59-bd63-86f748e1d27f/nmstate-metrics/0.log" Feb 04 10:03:54 crc kubenswrapper[4644]: I0204 10:03:54.373130 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-57bf49857b-w2rnn_292e6d27-c5ff-4352-a25e-a8b40030e9e2/nmstate-operator/0.log" Feb 04 10:03:54 crc kubenswrapper[4644]: I0204 10:03:54.466310 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-bd5678b45-bzsxp_2e070277-6ff5-41d0-ade7-81a146232b83/nmstate-webhook/0.log" Feb 04 10:04:04 crc kubenswrapper[4644]: I0204 10:04:04.989361 4644 scope.go:117] "RemoveContainer" containerID="4e0ac2aa1f6cf4205314ac1a233853fa57fa14cfa2ff985b4cc6c98caf2c39eb" Feb 04 10:04:05 crc kubenswrapper[4644]: I0204 10:04:05.012869 4644 scope.go:117] "RemoveContainer" containerID="4fff5f3a068a9537a308556b251167ce59f9fd5dcd3e29a7729930134d5b972e" Feb 04 10:04:05 crc kubenswrapper[4644]: I0204 10:04:05.056385 4644 scope.go:117] "RemoveContainer" containerID="74b797bc6b065d609c17e45ca654d9be7a3302e69f4cac951c985c13ce9475d8" Feb 04 10:04:22 crc kubenswrapper[4644]: I0204 10:04:22.662021 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-9c48fdfd-z7zmw_ef11c1e1-54cf-4428-9a73-9a8eb183dde6/kube-rbac-proxy/0.log" Feb 04 10:04:22 crc kubenswrapper[4644]: I0204 10:04:22.707406 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-9c48fdfd-z7zmw_ef11c1e1-54cf-4428-9a73-9a8eb183dde6/controller/0.log" Feb 04 10:04:22 crc kubenswrapper[4644]: I0204 10:04:22.891654 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-frr-files/0.log" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.067447 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-frr-files/0.log" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.075910 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-reloader/0.log" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.101117 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-metrics/0.log" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.130209 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-reloader/0.log" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.271125 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tlqkb"] Feb 04 10:04:23 crc kubenswrapper[4644]: E0204 10:04:23.271890 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af5539e5-757c-42e7-873d-028ade6b2185" containerName="registry-server" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.271987 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5539e5-757c-42e7-873d-028ade6b2185" containerName="registry-server" Feb 04 10:04:23 crc kubenswrapper[4644]: E0204 10:04:23.272070 4644 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="af5539e5-757c-42e7-873d-028ade6b2185" containerName="extract-content" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.272141 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5539e5-757c-42e7-873d-028ade6b2185" containerName="extract-content" Feb 04 10:04:23 crc kubenswrapper[4644]: E0204 10:04:23.272224 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af5539e5-757c-42e7-873d-028ade6b2185" containerName="extract-utilities" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.272290 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5539e5-757c-42e7-873d-028ade6b2185" containerName="extract-utilities" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.272628 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="af5539e5-757c-42e7-873d-028ade6b2185" containerName="registry-server" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.274401 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.282180 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlqkb"] Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.435459 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzfsn\" (UniqueName: \"kubernetes.io/projected/090c31c8-2353-4cf9-8528-c9c55bc9aab5-kube-api-access-qzfsn\") pod \"community-operators-tlqkb\" (UID: \"090c31c8-2353-4cf9-8528-c9c55bc9aab5\") " pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.436153 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c31c8-2353-4cf9-8528-c9c55bc9aab5-catalog-content\") pod \"community-operators-tlqkb\" (UID: \"090c31c8-2353-4cf9-8528-c9c55bc9aab5\") " pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.436341 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c31c8-2353-4cf9-8528-c9c55bc9aab5-utilities\") pod \"community-operators-tlqkb\" (UID: \"090c31c8-2353-4cf9-8528-c9c55bc9aab5\") " pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.537714 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c31c8-2353-4cf9-8528-c9c55bc9aab5-catalog-content\") pod \"community-operators-tlqkb\" (UID: \"090c31c8-2353-4cf9-8528-c9c55bc9aab5\") " pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.537792 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c31c8-2353-4cf9-8528-c9c55bc9aab5-utilities\") pod \"community-operators-tlqkb\" (UID: \"090c31c8-2353-4cf9-8528-c9c55bc9aab5\") " pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.537864 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzfsn\" (UniqueName: 
\"kubernetes.io/projected/090c31c8-2353-4cf9-8528-c9c55bc9aab5-kube-api-access-qzfsn\") pod \"community-operators-tlqkb\" (UID: \"090c31c8-2353-4cf9-8528-c9c55bc9aab5\") " pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.538538 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c31c8-2353-4cf9-8528-c9c55bc9aab5-catalog-content\") pod \"community-operators-tlqkb\" (UID: \"090c31c8-2353-4cf9-8528-c9c55bc9aab5\") " pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.542549 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c31c8-2353-4cf9-8528-c9c55bc9aab5-utilities\") pod \"community-operators-tlqkb\" (UID: \"090c31c8-2353-4cf9-8528-c9c55bc9aab5\") " pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.569019 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzfsn\" (UniqueName: \"kubernetes.io/projected/090c31c8-2353-4cf9-8528-c9c55bc9aab5-kube-api-access-qzfsn\") pod \"community-operators-tlqkb\" (UID: \"090c31c8-2353-4cf9-8528-c9c55bc9aab5\") " pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.634995 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.795632 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-metrics/0.log" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.915009 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-frr-files/0.log" Feb 04 10:04:23 crc kubenswrapper[4644]: I0204 10:04:23.975496 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-metrics/0.log" Feb 04 10:04:24 crc kubenswrapper[4644]: I0204 10:04:24.190336 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-reloader/0.log" Feb 04 10:04:24 crc kubenswrapper[4644]: I0204 10:04:24.250249 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlqkb"] Feb 04 10:04:24 crc kubenswrapper[4644]: I0204 10:04:24.409526 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-frr-files/0.log" Feb 04 10:04:24 crc kubenswrapper[4644]: I0204 10:04:24.476572 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-metrics/0.log" Feb 04 10:04:24 crc kubenswrapper[4644]: I0204 10:04:24.482206 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/cp-reloader/0.log" Feb 04 10:04:24 crc kubenswrapper[4644]: I0204 10:04:24.500640 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/controller/0.log" Feb 04 10:04:24 crc kubenswrapper[4644]: I0204 10:04:24.752713 4644 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/kube-rbac-proxy-frr/0.log" Feb 04 10:04:24 crc kubenswrapper[4644]: I0204 10:04:24.777126 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/kube-rbac-proxy/0.log" Feb 04 10:04:24 crc kubenswrapper[4644]: I0204 10:04:24.795450 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/frr-metrics/0.log" Feb 04 10:04:24 crc kubenswrapper[4644]: I0204 10:04:24.891162 4644 generic.go:334] "Generic (PLEG): container finished" podID="090c31c8-2353-4cf9-8528-c9c55bc9aab5" containerID="82211e1b61b0972461d86f49a22f3461bee37f8cc7866f15b3c316689562a70e" exitCode=0 Feb 04 10:04:24 crc kubenswrapper[4644]: I0204 10:04:24.891209 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqkb" event={"ID":"090c31c8-2353-4cf9-8528-c9c55bc9aab5","Type":"ContainerDied","Data":"82211e1b61b0972461d86f49a22f3461bee37f8cc7866f15b3c316689562a70e"} Feb 04 10:04:24 crc kubenswrapper[4644]: I0204 10:04:24.891239 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqkb" event={"ID":"090c31c8-2353-4cf9-8528-c9c55bc9aab5","Type":"ContainerStarted","Data":"fd4e47119d7d8cace6132560e3aaf9d222ca903c0406d2a799c8fe8f68a7541e"} Feb 04 10:04:25 crc kubenswrapper[4644]: I0204 10:04:25.104789 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/reloader/0.log" Feb 04 10:04:25 crc kubenswrapper[4644]: I0204 10:04:25.175007 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-97dfd4f9f-jcnsg_fd959e6b-00cf-4818-8b5a-0ad09c060e5e/frr-k8s-webhook-server/0.log" Feb 04 10:04:25 crc kubenswrapper[4644]: I0204 10:04:25.547803 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-668579b8df-dc2hb_880260a9-a2e8-463c-97ba-3b936f884d9d/manager/0.log" Feb 04 10:04:25 crc kubenswrapper[4644]: I0204 10:04:25.728874 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-b86757d9b-m6f8p_e5e99bd5-408c-4369-bd40-b31bb61ffc43/webhook-server/0.log" Feb 04 10:04:25 crc kubenswrapper[4644]: I0204 10:04:25.842960 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6bp8j_108d8162-12e1-4dfa-ab06-a416b6880150/frr/0.log" Feb 04 10:04:25 crc kubenswrapper[4644]: I0204 10:04:25.907965 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqkb" event={"ID":"090c31c8-2353-4cf9-8528-c9c55bc9aab5","Type":"ContainerStarted","Data":"4d601af02000b095891e7e64af7d5af17e2188393b2aca0226b3b42f5580d3d5"} Feb 04 10:04:25 crc kubenswrapper[4644]: I0204 10:04:25.945062 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-twwks_4496f888-8e49-4a88-b753-7f2d55dc317a/kube-rbac-proxy/0.log" Feb 04 10:04:26 crc kubenswrapper[4644]: I0204 10:04:26.371482 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-twwks_4496f888-8e49-4a88-b753-7f2d55dc317a/speaker/0.log" Feb 04 10:04:27 crc kubenswrapper[4644]: I0204 10:04:27.932838 4644 generic.go:334] "Generic (PLEG): container finished" podID="090c31c8-2353-4cf9-8528-c9c55bc9aab5" 
containerID="4d601af02000b095891e7e64af7d5af17e2188393b2aca0226b3b42f5580d3d5" exitCode=0 Feb 04 10:04:27 crc kubenswrapper[4644]: I0204 10:04:27.932913 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqkb" event={"ID":"090c31c8-2353-4cf9-8528-c9c55bc9aab5","Type":"ContainerDied","Data":"4d601af02000b095891e7e64af7d5af17e2188393b2aca0226b3b42f5580d3d5"} Feb 04 10:04:29 crc kubenswrapper[4644]: I0204 10:04:29.952765 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqkb" event={"ID":"090c31c8-2353-4cf9-8528-c9c55bc9aab5","Type":"ContainerStarted","Data":"238b2a2215fe2c7c86a43e3deb68c684035397f58cef645d89d44cef3b9f283a"} Feb 04 10:04:29 crc kubenswrapper[4644]: I0204 10:04:29.974767 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tlqkb" podStartSLOduration=3.360439569 podStartE2EDuration="6.97475125s" podCreationTimestamp="2026-02-04 10:04:23 +0000 UTC" firstStartedPulling="2026-02-04 10:04:24.901518964 +0000 UTC m=+4974.941576719" lastFinishedPulling="2026-02-04 10:04:28.515830645 +0000 UTC m=+4978.555888400" observedRunningTime="2026-02-04 10:04:29.966977778 +0000 UTC m=+4980.007035533" watchObservedRunningTime="2026-02-04 10:04:29.97475125 +0000 UTC m=+4980.014809005" Feb 04 10:04:33 crc kubenswrapper[4644]: I0204 10:04:33.635149 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:33 crc kubenswrapper[4644]: I0204 10:04:33.635729 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:33 crc kubenswrapper[4644]: I0204 10:04:33.687047 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:41 crc kubenswrapper[4644]: I0204 10:04:41.402861 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd_c161aa28-3e38-4cd4-9b44-29cffcdf6c81/util/0.log" Feb 04 10:04:41 crc kubenswrapper[4644]: I0204 10:04:41.604535 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd_c161aa28-3e38-4cd4-9b44-29cffcdf6c81/util/0.log" Feb 04 10:04:41 crc kubenswrapper[4644]: I0204 10:04:41.651756 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd_c161aa28-3e38-4cd4-9b44-29cffcdf6c81/pull/0.log" Feb 04 10:04:41 crc kubenswrapper[4644]: I0204 10:04:41.721215 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd_c161aa28-3e38-4cd4-9b44-29cffcdf6c81/pull/0.log" Feb 04 10:04:41 crc kubenswrapper[4644]: I0204 10:04:41.892523 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd_c161aa28-3e38-4cd4-9b44-29cffcdf6c81/pull/0.log" Feb 04 10:04:41 crc kubenswrapper[4644]: I0204 10:04:41.919629 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd_c161aa28-3e38-4cd4-9b44-29cffcdf6c81/extract/0.log" Feb 04 10:04:41 crc kubenswrapper[4644]: 
I0204 10:04:41.956030 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb87rjnfd_c161aa28-3e38-4cd4-9b44-29cffcdf6c81/util/0.log" Feb 04 10:04:42 crc kubenswrapper[4644]: I0204 10:04:42.098613 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd_4269a3f6-3fd6-4c8c-8dbd-08681fd28e39/util/0.log" Feb 04 10:04:42 crc kubenswrapper[4644]: I0204 10:04:42.305227 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd_4269a3f6-3fd6-4c8c-8dbd-08681fd28e39/pull/0.log" Feb 04 10:04:42 crc kubenswrapper[4644]: I0204 10:04:42.306933 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd_4269a3f6-3fd6-4c8c-8dbd-08681fd28e39/util/0.log" Feb 04 10:04:42 crc kubenswrapper[4644]: I0204 10:04:42.333397 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd_4269a3f6-3fd6-4c8c-8dbd-08681fd28e39/pull/0.log" Feb 04 10:04:42 crc kubenswrapper[4644]: I0204 10:04:42.496992 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd_4269a3f6-3fd6-4c8c-8dbd-08681fd28e39/pull/0.log" Feb 04 10:04:42 crc kubenswrapper[4644]: I0204 10:04:42.527361 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd_4269a3f6-3fd6-4c8c-8dbd-08681fd28e39/util/0.log" Feb 04 10:04:42 crc kubenswrapper[4644]: I0204 10:04:42.551488 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58ncmjd_4269a3f6-3fd6-4c8c-8dbd-08681fd28e39/extract/0.log" Feb 04 10:04:42 crc kubenswrapper[4644]: I0204 10:04:42.691174 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t7k59_3b67abb6-99ac-49bb-8ec2-5a445e1c18e6/extract-utilities/0.log" Feb 04 10:04:42 crc kubenswrapper[4644]: I0204 10:04:42.917287 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t7k59_3b67abb6-99ac-49bb-8ec2-5a445e1c18e6/extract-utilities/0.log" Feb 04 10:04:42 crc kubenswrapper[4644]: I0204 10:04:42.943363 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t7k59_3b67abb6-99ac-49bb-8ec2-5a445e1c18e6/extract-content/0.log" Feb 04 10:04:42 crc kubenswrapper[4644]: I0204 10:04:42.966414 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t7k59_3b67abb6-99ac-49bb-8ec2-5a445e1c18e6/extract-content/0.log" Feb 04 10:04:43 crc kubenswrapper[4644]: I0204 10:04:43.365371 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t7k59_3b67abb6-99ac-49bb-8ec2-5a445e1c18e6/extract-content/0.log" Feb 04 10:04:43 crc kubenswrapper[4644]: I0204 10:04:43.376164 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t7k59_3b67abb6-99ac-49bb-8ec2-5a445e1c18e6/extract-utilities/0.log" Feb 04 10:04:43 crc kubenswrapper[4644]: I0204 10:04:43.629443 4644 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9q6zx_acca16e8-193f-4e4c-acee-8f1067e6260f/extract-utilities/0.log" Feb 04 10:04:43 crc kubenswrapper[4644]: I0204 10:04:43.705077 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:43 crc kubenswrapper[4644]: I0204 10:04:43.783254 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tlqkb"] Feb 04 10:04:43 crc kubenswrapper[4644]: I0204 10:04:43.890044 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t7k59_3b67abb6-99ac-49bb-8ec2-5a445e1c18e6/registry-server/0.log" Feb 04 10:04:44 crc kubenswrapper[4644]: I0204 10:04:44.027253 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9q6zx_acca16e8-193f-4e4c-acee-8f1067e6260f/extract-utilities/0.log" Feb 04 10:04:44 crc kubenswrapper[4644]: I0204 10:04:44.063203 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tlqkb" podUID="090c31c8-2353-4cf9-8528-c9c55bc9aab5" containerName="registry-server" containerID="cri-o://238b2a2215fe2c7c86a43e3deb68c684035397f58cef645d89d44cef3b9f283a" gracePeriod=2 Feb 04 10:04:44 crc kubenswrapper[4644]: I0204 10:04:44.113077 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9q6zx_acca16e8-193f-4e4c-acee-8f1067e6260f/extract-content/0.log" Feb 04 10:04:44 crc kubenswrapper[4644]: I0204 10:04:44.116430 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9q6zx_acca16e8-193f-4e4c-acee-8f1067e6260f/extract-content/0.log" Feb 04 10:04:44 crc kubenswrapper[4644]: I0204 10:04:44.390717 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9q6zx_acca16e8-193f-4e4c-acee-8f1067e6260f/extract-content/0.log" Feb 04 10:04:44 crc kubenswrapper[4644]: I0204 10:04:44.403430 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9q6zx_acca16e8-193f-4e4c-acee-8f1067e6260f/extract-utilities/0.log" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.076628 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.091614 4644 generic.go:334] "Generic (PLEG): container finished" podID="090c31c8-2353-4cf9-8528-c9c55bc9aab5" containerID="238b2a2215fe2c7c86a43e3deb68c684035397f58cef645d89d44cef3b9f283a" exitCode=0 Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.091659 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqkb" event={"ID":"090c31c8-2353-4cf9-8528-c9c55bc9aab5","Type":"ContainerDied","Data":"238b2a2215fe2c7c86a43e3deb68c684035397f58cef645d89d44cef3b9f283a"} Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.091688 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqkb" event={"ID":"090c31c8-2353-4cf9-8528-c9c55bc9aab5","Type":"ContainerDied","Data":"fd4e47119d7d8cace6132560e3aaf9d222ca903c0406d2a799c8fe8f68a7541e"} Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.091710 4644 scope.go:117] "RemoveContainer" containerID="238b2a2215fe2c7c86a43e3deb68c684035397f58cef645d89d44cef3b9f283a" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.123618 4644 scope.go:117] "RemoveContainer" containerID="4d601af02000b095891e7e64af7d5af17e2188393b2aca0226b3b42f5580d3d5" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.169691 4644 scope.go:117] "RemoveContainer" containerID="82211e1b61b0972461d86f49a22f3461bee37f8cc7866f15b3c316689562a70e" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.172098 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzfsn\" (UniqueName: \"kubernetes.io/projected/090c31c8-2353-4cf9-8528-c9c55bc9aab5-kube-api-access-qzfsn\") pod \"090c31c8-2353-4cf9-8528-c9c55bc9aab5\" (UID: \"090c31c8-2353-4cf9-8528-c9c55bc9aab5\") " Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.172301 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c31c8-2353-4cf9-8528-c9c55bc9aab5-catalog-content\") pod \"090c31c8-2353-4cf9-8528-c9c55bc9aab5\" (UID: \"090c31c8-2353-4cf9-8528-c9c55bc9aab5\") " Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.172426 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c31c8-2353-4cf9-8528-c9c55bc9aab5-utilities\") pod \"090c31c8-2353-4cf9-8528-c9c55bc9aab5\" (UID: \"090c31c8-2353-4cf9-8528-c9c55bc9aab5\") " Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.179642 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/090c31c8-2353-4cf9-8528-c9c55bc9aab5-utilities" (OuterVolumeSpecName: "utilities") pod "090c31c8-2353-4cf9-8528-c9c55bc9aab5" (UID: "090c31c8-2353-4cf9-8528-c9c55bc9aab5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.186897 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/090c31c8-2353-4cf9-8528-c9c55bc9aab5-kube-api-access-qzfsn" (OuterVolumeSpecName: "kube-api-access-qzfsn") pod "090c31c8-2353-4cf9-8528-c9c55bc9aab5" (UID: "090c31c8-2353-4cf9-8528-c9c55bc9aab5"). InnerVolumeSpecName "kube-api-access-qzfsn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.242874 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/090c31c8-2353-4cf9-8528-c9c55bc9aab5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "090c31c8-2353-4cf9-8528-c9c55bc9aab5" (UID: "090c31c8-2353-4cf9-8528-c9c55bc9aab5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.257272 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9q6zx_acca16e8-193f-4e4c-acee-8f1067e6260f/registry-server/0.log" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.275198 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c31c8-2353-4cf9-8528-c9c55bc9aab5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.275227 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c31c8-2353-4cf9-8528-c9c55bc9aab5-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.275238 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzfsn\" (UniqueName: \"kubernetes.io/projected/090c31c8-2353-4cf9-8528-c9c55bc9aab5-kube-api-access-qzfsn\") on node \"crc\" DevicePath \"\"" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.292818 4644 scope.go:117] "RemoveContainer" containerID="238b2a2215fe2c7c86a43e3deb68c684035397f58cef645d89d44cef3b9f283a" Feb 04 10:04:45 crc kubenswrapper[4644]: E0204 10:04:45.293349 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"238b2a2215fe2c7c86a43e3deb68c684035397f58cef645d89d44cef3b9f283a\": container with ID starting with 238b2a2215fe2c7c86a43e3deb68c684035397f58cef645d89d44cef3b9f283a not found: ID does not exist" containerID="238b2a2215fe2c7c86a43e3deb68c684035397f58cef645d89d44cef3b9f283a" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.293397 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"238b2a2215fe2c7c86a43e3deb68c684035397f58cef645d89d44cef3b9f283a"} err="failed to get container status \"238b2a2215fe2c7c86a43e3deb68c684035397f58cef645d89d44cef3b9f283a\": rpc error: code = NotFound desc = could not find container \"238b2a2215fe2c7c86a43e3deb68c684035397f58cef645d89d44cef3b9f283a\": container with ID starting with 238b2a2215fe2c7c86a43e3deb68c684035397f58cef645d89d44cef3b9f283a not found: ID does not exist" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.293431 4644 scope.go:117] "RemoveContainer" containerID="4d601af02000b095891e7e64af7d5af17e2188393b2aca0226b3b42f5580d3d5" Feb 04 10:04:45 crc kubenswrapper[4644]: E0204 10:04:45.293709 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d601af02000b095891e7e64af7d5af17e2188393b2aca0226b3b42f5580d3d5\": container with ID starting with 4d601af02000b095891e7e64af7d5af17e2188393b2aca0226b3b42f5580d3d5 not found: ID does not exist" containerID="4d601af02000b095891e7e64af7d5af17e2188393b2aca0226b3b42f5580d3d5" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.293750 4644 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"4d601af02000b095891e7e64af7d5af17e2188393b2aca0226b3b42f5580d3d5"} err="failed to get container status \"4d601af02000b095891e7e64af7d5af17e2188393b2aca0226b3b42f5580d3d5\": rpc error: code = NotFound desc = could not find container \"4d601af02000b095891e7e64af7d5af17e2188393b2aca0226b3b42f5580d3d5\": container with ID starting with 4d601af02000b095891e7e64af7d5af17e2188393b2aca0226b3b42f5580d3d5 not found: ID does not exist" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.293768 4644 scope.go:117] "RemoveContainer" containerID="82211e1b61b0972461d86f49a22f3461bee37f8cc7866f15b3c316689562a70e" Feb 04 10:04:45 crc kubenswrapper[4644]: E0204 10:04:45.294091 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82211e1b61b0972461d86f49a22f3461bee37f8cc7866f15b3c316689562a70e\": container with ID starting with 82211e1b61b0972461d86f49a22f3461bee37f8cc7866f15b3c316689562a70e not found: ID does not exist" containerID="82211e1b61b0972461d86f49a22f3461bee37f8cc7866f15b3c316689562a70e" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.294129 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82211e1b61b0972461d86f49a22f3461bee37f8cc7866f15b3c316689562a70e"} err="failed to get container status \"82211e1b61b0972461d86f49a22f3461bee37f8cc7866f15b3c316689562a70e\": rpc error: code = NotFound desc = could not find container \"82211e1b61b0972461d86f49a22f3461bee37f8cc7866f15b3c316689562a70e\": container with ID starting with 82211e1b61b0972461d86f49a22f3461bee37f8cc7866f15b3c316689562a70e not found: ID does not exist" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.987464 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nhnlp_42a112d4-2c64-4c7f-a895-a85e29b12d8a/marketplace-operator/1.log" Feb 04 10:04:45 crc kubenswrapper[4644]: I0204 10:04:45.989481 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nhnlp_42a112d4-2c64-4c7f-a895-a85e29b12d8a/marketplace-operator/2.log" Feb 04 10:04:46 crc kubenswrapper[4644]: I0204 10:04:46.100467 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tlqkb" Feb 04 10:04:46 crc kubenswrapper[4644]: I0204 10:04:46.132389 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tlqkb"] Feb 04 10:04:46 crc kubenswrapper[4644]: I0204 10:04:46.133899 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nts4j_0ddce8ec-a47d-4efc-b273-56ec3223320d/extract-utilities/0.log" Feb 04 10:04:46 crc kubenswrapper[4644]: I0204 10:04:46.147901 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tlqkb"] Feb 04 10:04:46 crc kubenswrapper[4644]: I0204 10:04:46.563073 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nts4j_0ddce8ec-a47d-4efc-b273-56ec3223320d/extract-content/0.log" Feb 04 10:04:46 crc kubenswrapper[4644]: I0204 10:04:46.569161 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nts4j_0ddce8ec-a47d-4efc-b273-56ec3223320d/extract-utilities/0.log" Feb 04 10:04:46 crc kubenswrapper[4644]: I0204 10:04:46.616034 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nts4j_0ddce8ec-a47d-4efc-b273-56ec3223320d/extract-content/0.log" Feb 04 10:04:46 crc kubenswrapper[4644]: I0204 10:04:46.673264 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="090c31c8-2353-4cf9-8528-c9c55bc9aab5" path="/var/lib/kubelet/pods/090c31c8-2353-4cf9-8528-c9c55bc9aab5/volumes" Feb 04 10:04:46 crc kubenswrapper[4644]: I0204 10:04:46.861563 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9k5x_5b38357a-6348-4d8c-b09d-a06cbdd14739/extract-utilities/0.log" Feb 04 10:04:46 crc kubenswrapper[4644]: I0204 10:04:46.863789 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nts4j_0ddce8ec-a47d-4efc-b273-56ec3223320d/extract-utilities/0.log" Feb 04 10:04:46 crc kubenswrapper[4644]: I0204 10:04:46.865533 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nts4j_0ddce8ec-a47d-4efc-b273-56ec3223320d/extract-content/0.log" Feb 04 10:04:47 crc kubenswrapper[4644]: I0204 10:04:47.028596 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nts4j_0ddce8ec-a47d-4efc-b273-56ec3223320d/registry-server/0.log" Feb 04 10:04:47 crc kubenswrapper[4644]: I0204 10:04:47.177769 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9k5x_5b38357a-6348-4d8c-b09d-a06cbdd14739/extract-content/0.log" Feb 04 10:04:47 crc kubenswrapper[4644]: I0204 10:04:47.184993 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9k5x_5b38357a-6348-4d8c-b09d-a06cbdd14739/extract-content/0.log" Feb 04 10:04:47 crc kubenswrapper[4644]: I0204 10:04:47.191736 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9k5x_5b38357a-6348-4d8c-b09d-a06cbdd14739/extract-utilities/0.log" Feb 04 10:04:47 crc kubenswrapper[4644]: I0204 10:04:47.447211 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9k5x_5b38357a-6348-4d8c-b09d-a06cbdd14739/extract-utilities/0.log" Feb 04 10:04:47 crc kubenswrapper[4644]: I0204 10:04:47.503155 4644 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9k5x_5b38357a-6348-4d8c-b09d-a06cbdd14739/extract-content/0.log" Feb 04 10:04:47 crc kubenswrapper[4644]: I0204 10:04:47.951208 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9k5x_5b38357a-6348-4d8c-b09d-a06cbdd14739/registry-server/0.log" Feb 04 10:05:05 crc kubenswrapper[4644]: I0204 10:05:05.554937 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 10:05:05 crc kubenswrapper[4644]: I0204 10:05:05.555548 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 10:05:35 crc kubenswrapper[4644]: I0204 10:05:35.555001 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 10:05:35 crc kubenswrapper[4644]: I0204 10:05:35.556244 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 10:06:05 crc kubenswrapper[4644]: I0204 10:06:05.555853 4644 patch_prober.go:28] interesting pod/machine-config-daemon-qwrck container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 10:06:05 crc kubenswrapper[4644]: I0204 10:06:05.556468 4644 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 10:06:05 crc kubenswrapper[4644]: I0204 10:06:05.556530 4644 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" Feb 04 10:06:05 crc kubenswrapper[4644]: I0204 10:06:05.557592 4644 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262"} pod="openshift-machine-config-operator/machine-config-daemon-qwrck" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 10:06:05 crc kubenswrapper[4644]: I0204 10:06:05.557719 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" 
containerName="machine-config-daemon" containerID="cri-o://460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" gracePeriod=600 Feb 04 10:06:05 crc kubenswrapper[4644]: E0204 10:06:05.681640 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:06:06 crc kubenswrapper[4644]: I0204 10:06:06.174339 4644 generic.go:334] "Generic (PLEG): container finished" podID="c2a87f38-c8a0-4007-b926-1dafb84e7483" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" exitCode=0 Feb 04 10:06:06 crc kubenswrapper[4644]: I0204 10:06:06.174376 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerDied","Data":"460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262"} Feb 04 10:06:06 crc kubenswrapper[4644]: I0204 10:06:06.174657 4644 scope.go:117] "RemoveContainer" containerID="ac59935f6b13f840dd407828b18da4fe204914aaca0883c725a32183217871ab" Feb 04 10:06:06 crc kubenswrapper[4644]: I0204 10:06:06.175256 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:06:06 crc kubenswrapper[4644]: E0204 10:06:06.175556 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:06:17 crc kubenswrapper[4644]: I0204 10:06:17.660434 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:06:17 crc kubenswrapper[4644]: E0204 10:06:17.661444 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:06:28 crc kubenswrapper[4644]: I0204 10:06:28.660880 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:06:28 crc kubenswrapper[4644]: E0204 10:06:28.661741 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:06:43 crc kubenswrapper[4644]: I0204 10:06:43.660514 4644 scope.go:117] "RemoveContainer" 
containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:06:43 crc kubenswrapper[4644]: E0204 10:06:43.661686 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:06:55 crc kubenswrapper[4644]: I0204 10:06:55.662286 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:06:55 crc kubenswrapper[4644]: E0204 10:06:55.663240 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:07:05 crc kubenswrapper[4644]: I0204 10:07:05.182825 4644 scope.go:117] "RemoveContainer" containerID="84f34947e1f107521c9fccd94f1dd8cd61e818fd09c4bbb21096e8dee4c22368" Feb 04 10:07:06 crc kubenswrapper[4644]: I0204 10:07:06.660597 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:07:06 crc kubenswrapper[4644]: E0204 10:07:06.661126 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:07:16 crc kubenswrapper[4644]: I0204 10:07:16.167948 4644 generic.go:334] "Generic (PLEG): container finished" podID="a256db3c-8355-4426-a515-28f0f8d3017c" containerID="fe9dcedf62a748973827bc6019626c2acd9af8f71e1a561393664300aa95d0a5" exitCode=0 Feb 04 10:07:16 crc kubenswrapper[4644]: I0204 10:07:16.168017 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8jgb/must-gather-99h5c" event={"ID":"a256db3c-8355-4426-a515-28f0f8d3017c","Type":"ContainerDied","Data":"fe9dcedf62a748973827bc6019626c2acd9af8f71e1a561393664300aa95d0a5"} Feb 04 10:07:16 crc kubenswrapper[4644]: I0204 10:07:16.169190 4644 scope.go:117] "RemoveContainer" containerID="fe9dcedf62a748973827bc6019626c2acd9af8f71e1a561393664300aa95d0a5" Feb 04 10:07:16 crc kubenswrapper[4644]: I0204 10:07:16.251250 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x8jgb_must-gather-99h5c_a256db3c-8355-4426-a515-28f0f8d3017c/gather/0.log" Feb 04 10:07:20 crc kubenswrapper[4644]: I0204 10:07:20.668533 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:07:20 crc kubenswrapper[4644]: E0204 10:07:20.669500 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.036514 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zrgd5"] Feb 04 10:07:29 crc kubenswrapper[4644]: E0204 10:07:29.037859 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090c31c8-2353-4cf9-8528-c9c55bc9aab5" containerName="extract-utilities" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.037876 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="090c31c8-2353-4cf9-8528-c9c55bc9aab5" containerName="extract-utilities" Feb 04 10:07:29 crc kubenswrapper[4644]: E0204 10:07:29.037912 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090c31c8-2353-4cf9-8528-c9c55bc9aab5" containerName="registry-server" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.037920 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="090c31c8-2353-4cf9-8528-c9c55bc9aab5" containerName="registry-server" Feb 04 10:07:29 crc kubenswrapper[4644]: E0204 10:07:29.037946 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090c31c8-2353-4cf9-8528-c9c55bc9aab5" containerName="extract-content" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.037955 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="090c31c8-2353-4cf9-8528-c9c55bc9aab5" containerName="extract-content" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.038163 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="090c31c8-2353-4cf9-8528-c9c55bc9aab5" containerName="registry-server" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.039484 4644 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.069947 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrgd5"] Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.129289 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d382e62f-fe7d-4185-aa65-039f4f652e17-utilities\") pod \"redhat-operators-zrgd5\" (UID: \"d382e62f-fe7d-4185-aa65-039f4f652e17\") " pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.129393 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d382e62f-fe7d-4185-aa65-039f4f652e17-catalog-content\") pod \"redhat-operators-zrgd5\" (UID: \"d382e62f-fe7d-4185-aa65-039f4f652e17\") " pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.129695 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdj8k\" (UniqueName: \"kubernetes.io/projected/d382e62f-fe7d-4185-aa65-039f4f652e17-kube-api-access-qdj8k\") pod \"redhat-operators-zrgd5\" (UID: \"d382e62f-fe7d-4185-aa65-039f4f652e17\") " pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.231472 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d382e62f-fe7d-4185-aa65-039f4f652e17-utilities\") pod \"redhat-operators-zrgd5\" (UID: \"d382e62f-fe7d-4185-aa65-039f4f652e17\") " pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.231536 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d382e62f-fe7d-4185-aa65-039f4f652e17-catalog-content\") pod \"redhat-operators-zrgd5\" (UID: \"d382e62f-fe7d-4185-aa65-039f4f652e17\") " pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.231614 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdj8k\" (UniqueName: \"kubernetes.io/projected/d382e62f-fe7d-4185-aa65-039f4f652e17-kube-api-access-qdj8k\") pod \"redhat-operators-zrgd5\" (UID: \"d382e62f-fe7d-4185-aa65-039f4f652e17\") " pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.232354 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d382e62f-fe7d-4185-aa65-039f4f652e17-utilities\") pod \"redhat-operators-zrgd5\" (UID: \"d382e62f-fe7d-4185-aa65-039f4f652e17\") " pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.232561 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d382e62f-fe7d-4185-aa65-039f4f652e17-catalog-content\") pod \"redhat-operators-zrgd5\" (UID: \"d382e62f-fe7d-4185-aa65-039f4f652e17\") " pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.257423 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qdj8k\" (UniqueName: \"kubernetes.io/projected/d382e62f-fe7d-4185-aa65-039f4f652e17-kube-api-access-qdj8k\") pod \"redhat-operators-zrgd5\" (UID: \"d382e62f-fe7d-4185-aa65-039f4f652e17\") " pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:29 crc kubenswrapper[4644]: I0204 10:07:29.359793 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:30 crc kubenswrapper[4644]: I0204 10:07:30.017240 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrgd5"] Feb 04 10:07:30 crc kubenswrapper[4644]: I0204 10:07:30.290915 4644 generic.go:334] "Generic (PLEG): container finished" podID="d382e62f-fe7d-4185-aa65-039f4f652e17" containerID="26df1d5246b930b67d95714dd929a0e4f67da866e1bdb043887b08d7f1ae3afa" exitCode=0 Feb 04 10:07:30 crc kubenswrapper[4644]: I0204 10:07:30.291054 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrgd5" event={"ID":"d382e62f-fe7d-4185-aa65-039f4f652e17","Type":"ContainerDied","Data":"26df1d5246b930b67d95714dd929a0e4f67da866e1bdb043887b08d7f1ae3afa"} Feb 04 10:07:30 crc kubenswrapper[4644]: I0204 10:07:30.291245 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrgd5" event={"ID":"d382e62f-fe7d-4185-aa65-039f4f652e17","Type":"ContainerStarted","Data":"7578ec6b911aa8df5017c6d55883fb9acd5485d77d60165005497abfe7badb5e"} Feb 04 10:07:30 crc kubenswrapper[4644]: I0204 10:07:30.297167 4644 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 10:07:31 crc kubenswrapper[4644]: I0204 10:07:31.814457 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x8jgb/must-gather-99h5c"] Feb 04 10:07:31 crc kubenswrapper[4644]: I0204 10:07:31.814964 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-x8jgb/must-gather-99h5c" podUID="a256db3c-8355-4426-a515-28f0f8d3017c" containerName="copy" containerID="cri-o://464d67585ad70769402984c9e0ce13f1fc38768b1532dd10ac45aebcfaa43290" gracePeriod=2 Feb 04 10:07:31 crc kubenswrapper[4644]: I0204 10:07:31.828019 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x8jgb/must-gather-99h5c"] Feb 04 10:07:32 crc kubenswrapper[4644]: I0204 10:07:32.320991 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x8jgb_must-gather-99h5c_a256db3c-8355-4426-a515-28f0f8d3017c/copy/0.log" Feb 04 10:07:32 crc kubenswrapper[4644]: I0204 10:07:32.321528 4644 generic.go:334] "Generic (PLEG): container finished" podID="a256db3c-8355-4426-a515-28f0f8d3017c" containerID="464d67585ad70769402984c9e0ce13f1fc38768b1532dd10ac45aebcfaa43290" exitCode=143 Feb 04 10:07:32 crc kubenswrapper[4644]: I0204 10:07:32.321573 4644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9a3742f2716ac20b2c2acec5048e73040235fa8f0a88d100526f70fea753196" Feb 04 10:07:32 crc kubenswrapper[4644]: I0204 10:07:32.354705 4644 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x8jgb_must-gather-99h5c_a256db3c-8355-4426-a515-28f0f8d3017c/copy/0.log" Feb 04 10:07:32 crc kubenswrapper[4644]: I0204 10:07:32.355466 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8jgb/must-gather-99h5c" Feb 04 10:07:32 crc kubenswrapper[4644]: I0204 10:07:32.408707 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a256db3c-8355-4426-a515-28f0f8d3017c-must-gather-output\") pod \"a256db3c-8355-4426-a515-28f0f8d3017c\" (UID: \"a256db3c-8355-4426-a515-28f0f8d3017c\") " Feb 04 10:07:32 crc kubenswrapper[4644]: I0204 10:07:32.408913 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g56tp\" (UniqueName: \"kubernetes.io/projected/a256db3c-8355-4426-a515-28f0f8d3017c-kube-api-access-g56tp\") pod \"a256db3c-8355-4426-a515-28f0f8d3017c\" (UID: \"a256db3c-8355-4426-a515-28f0f8d3017c\") " Feb 04 10:07:32 crc kubenswrapper[4644]: I0204 10:07:32.419779 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a256db3c-8355-4426-a515-28f0f8d3017c-kube-api-access-g56tp" (OuterVolumeSpecName: "kube-api-access-g56tp") pod "a256db3c-8355-4426-a515-28f0f8d3017c" (UID: "a256db3c-8355-4426-a515-28f0f8d3017c"). InnerVolumeSpecName "kube-api-access-g56tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 10:07:32 crc kubenswrapper[4644]: I0204 10:07:32.510858 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g56tp\" (UniqueName: \"kubernetes.io/projected/a256db3c-8355-4426-a515-28f0f8d3017c-kube-api-access-g56tp\") on node \"crc\" DevicePath \"\"" Feb 04 10:07:32 crc kubenswrapper[4644]: I0204 10:07:32.613224 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a256db3c-8355-4426-a515-28f0f8d3017c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a256db3c-8355-4426-a515-28f0f8d3017c" (UID: "a256db3c-8355-4426-a515-28f0f8d3017c"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 10:07:32 crc kubenswrapper[4644]: I0204 10:07:32.674618 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a256db3c-8355-4426-a515-28f0f8d3017c" path="/var/lib/kubelet/pods/a256db3c-8355-4426-a515-28f0f8d3017c/volumes" Feb 04 10:07:32 crc kubenswrapper[4644]: I0204 10:07:32.715518 4644 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a256db3c-8355-4426-a515-28f0f8d3017c-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 04 10:07:33 crc kubenswrapper[4644]: I0204 10:07:33.332361 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8jgb/must-gather-99h5c" Feb 04 10:07:33 crc kubenswrapper[4644]: I0204 10:07:33.332422 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrgd5" event={"ID":"d382e62f-fe7d-4185-aa65-039f4f652e17","Type":"ContainerStarted","Data":"4bd91ccd0aba8eb931eecc374bff57d24ac99d029e4d0362c2b02fc2df1bcad8"} Feb 04 10:07:35 crc kubenswrapper[4644]: I0204 10:07:35.659581 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:07:35 crc kubenswrapper[4644]: E0204 10:07:35.660125 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:07:37 crc kubenswrapper[4644]: I0204 10:07:37.369489 4644 generic.go:334] "Generic (PLEG): container finished" podID="d382e62f-fe7d-4185-aa65-039f4f652e17" containerID="4bd91ccd0aba8eb931eecc374bff57d24ac99d029e4d0362c2b02fc2df1bcad8" exitCode=0 Feb 04 10:07:37 crc kubenswrapper[4644]: I0204 10:07:37.369606 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrgd5" event={"ID":"d382e62f-fe7d-4185-aa65-039f4f652e17","Type":"ContainerDied","Data":"4bd91ccd0aba8eb931eecc374bff57d24ac99d029e4d0362c2b02fc2df1bcad8"} Feb 04 10:07:39 crc kubenswrapper[4644]: I0204 10:07:39.388766 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrgd5" event={"ID":"d382e62f-fe7d-4185-aa65-039f4f652e17","Type":"ContainerStarted","Data":"98b43a10d6e14590d6b054d07d01b5655bfc7a93054a0fb560691da338f62e45"} Feb 04 10:07:39 crc kubenswrapper[4644]: I0204 10:07:39.421035 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zrgd5" podStartSLOduration=2.852862997 podStartE2EDuration="10.421014479s" podCreationTimestamp="2026-02-04 10:07:29 +0000 UTC" firstStartedPulling="2026-02-04 10:07:30.296868935 +0000 UTC m=+5160.336926690" lastFinishedPulling="2026-02-04 10:07:37.865020427 +0000 UTC m=+5167.905078172" observedRunningTime="2026-02-04 10:07:39.413225316 +0000 UTC m=+5169.453283071" watchObservedRunningTime="2026-02-04 10:07:39.421014479 +0000 UTC m=+5169.461072234" Feb 04 10:07:49 crc kubenswrapper[4644]: I0204 10:07:49.359935 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:49 crc kubenswrapper[4644]: I0204 10:07:49.360777 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:49 crc kubenswrapper[4644]: I0204 10:07:49.431118 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:49 crc kubenswrapper[4644]: I0204 10:07:49.518048 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:49 crc kubenswrapper[4644]: I0204 10:07:49.677472 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zrgd5"] Feb 04 10:07:50 crc 
kubenswrapper[4644]: I0204 10:07:50.667122 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:07:50 crc kubenswrapper[4644]: E0204 10:07:50.667447 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:07:51 crc kubenswrapper[4644]: I0204 10:07:51.490048 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zrgd5" podUID="d382e62f-fe7d-4185-aa65-039f4f652e17" containerName="registry-server" containerID="cri-o://98b43a10d6e14590d6b054d07d01b5655bfc7a93054a0fb560691da338f62e45" gracePeriod=2 Feb 04 10:07:51 crc kubenswrapper[4644]: I0204 10:07:51.966682 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.099143 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d382e62f-fe7d-4185-aa65-039f4f652e17-catalog-content\") pod \"d382e62f-fe7d-4185-aa65-039f4f652e17\" (UID: \"d382e62f-fe7d-4185-aa65-039f4f652e17\") " Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.099291 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdj8k\" (UniqueName: \"kubernetes.io/projected/d382e62f-fe7d-4185-aa65-039f4f652e17-kube-api-access-qdj8k\") pod \"d382e62f-fe7d-4185-aa65-039f4f652e17\" (UID: \"d382e62f-fe7d-4185-aa65-039f4f652e17\") " Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.099389 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d382e62f-fe7d-4185-aa65-039f4f652e17-utilities\") pod \"d382e62f-fe7d-4185-aa65-039f4f652e17\" (UID: \"d382e62f-fe7d-4185-aa65-039f4f652e17\") " Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.100436 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d382e62f-fe7d-4185-aa65-039f4f652e17-utilities" (OuterVolumeSpecName: "utilities") pod "d382e62f-fe7d-4185-aa65-039f4f652e17" (UID: "d382e62f-fe7d-4185-aa65-039f4f652e17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.105843 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d382e62f-fe7d-4185-aa65-039f4f652e17-kube-api-access-qdj8k" (OuterVolumeSpecName: "kube-api-access-qdj8k") pod "d382e62f-fe7d-4185-aa65-039f4f652e17" (UID: "d382e62f-fe7d-4185-aa65-039f4f652e17"). InnerVolumeSpecName "kube-api-access-qdj8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.202226 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d382e62f-fe7d-4185-aa65-039f4f652e17-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.202367 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdj8k\" (UniqueName: \"kubernetes.io/projected/d382e62f-fe7d-4185-aa65-039f4f652e17-kube-api-access-qdj8k\") on node \"crc\" DevicePath \"\"" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.243009 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d382e62f-fe7d-4185-aa65-039f4f652e17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d382e62f-fe7d-4185-aa65-039f4f652e17" (UID: "d382e62f-fe7d-4185-aa65-039f4f652e17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.303890 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d382e62f-fe7d-4185-aa65-039f4f652e17-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.500012 4644 generic.go:334] "Generic (PLEG): container finished" podID="d382e62f-fe7d-4185-aa65-039f4f652e17" containerID="98b43a10d6e14590d6b054d07d01b5655bfc7a93054a0fb560691da338f62e45" exitCode=0 Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.500064 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrgd5" event={"ID":"d382e62f-fe7d-4185-aa65-039f4f652e17","Type":"ContainerDied","Data":"98b43a10d6e14590d6b054d07d01b5655bfc7a93054a0fb560691da338f62e45"} Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.500077 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrgd5" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.500093 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrgd5" event={"ID":"d382e62f-fe7d-4185-aa65-039f4f652e17","Type":"ContainerDied","Data":"7578ec6b911aa8df5017c6d55883fb9acd5485d77d60165005497abfe7badb5e"} Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.500114 4644 scope.go:117] "RemoveContainer" containerID="98b43a10d6e14590d6b054d07d01b5655bfc7a93054a0fb560691da338f62e45" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.518590 4644 scope.go:117] "RemoveContainer" containerID="4bd91ccd0aba8eb931eecc374bff57d24ac99d029e4d0362c2b02fc2df1bcad8" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.539503 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zrgd5"] Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.548489 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zrgd5"] Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.565854 4644 scope.go:117] "RemoveContainer" containerID="26df1d5246b930b67d95714dd929a0e4f67da866e1bdb043887b08d7f1ae3afa" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.590470 4644 scope.go:117] "RemoveContainer" containerID="98b43a10d6e14590d6b054d07d01b5655bfc7a93054a0fb560691da338f62e45" Feb 04 10:07:52 crc kubenswrapper[4644]: E0204 10:07:52.590903 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98b43a10d6e14590d6b054d07d01b5655bfc7a93054a0fb560691da338f62e45\": container with ID starting with 98b43a10d6e14590d6b054d07d01b5655bfc7a93054a0fb560691da338f62e45 not found: ID does not exist" containerID="98b43a10d6e14590d6b054d07d01b5655bfc7a93054a0fb560691da338f62e45" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.590944 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98b43a10d6e14590d6b054d07d01b5655bfc7a93054a0fb560691da338f62e45"} err="failed to get container status \"98b43a10d6e14590d6b054d07d01b5655bfc7a93054a0fb560691da338f62e45\": rpc error: code = NotFound desc = could not find container \"98b43a10d6e14590d6b054d07d01b5655bfc7a93054a0fb560691da338f62e45\": container with ID starting with 98b43a10d6e14590d6b054d07d01b5655bfc7a93054a0fb560691da338f62e45 not found: ID does not exist" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.590967 4644 scope.go:117] "RemoveContainer" containerID="4bd91ccd0aba8eb931eecc374bff57d24ac99d029e4d0362c2b02fc2df1bcad8" Feb 04 10:07:52 crc kubenswrapper[4644]: E0204 10:07:52.591501 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bd91ccd0aba8eb931eecc374bff57d24ac99d029e4d0362c2b02fc2df1bcad8\": container with ID starting with 4bd91ccd0aba8eb931eecc374bff57d24ac99d029e4d0362c2b02fc2df1bcad8 not found: ID does not exist" containerID="4bd91ccd0aba8eb931eecc374bff57d24ac99d029e4d0362c2b02fc2df1bcad8" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.591560 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bd91ccd0aba8eb931eecc374bff57d24ac99d029e4d0362c2b02fc2df1bcad8"} err="failed to get container status \"4bd91ccd0aba8eb931eecc374bff57d24ac99d029e4d0362c2b02fc2df1bcad8\": rpc error: code = NotFound desc = could not find container 
\"4bd91ccd0aba8eb931eecc374bff57d24ac99d029e4d0362c2b02fc2df1bcad8\": container with ID starting with 4bd91ccd0aba8eb931eecc374bff57d24ac99d029e4d0362c2b02fc2df1bcad8 not found: ID does not exist" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.591597 4644 scope.go:117] "RemoveContainer" containerID="26df1d5246b930b67d95714dd929a0e4f67da866e1bdb043887b08d7f1ae3afa" Feb 04 10:07:52 crc kubenswrapper[4644]: E0204 10:07:52.591946 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26df1d5246b930b67d95714dd929a0e4f67da866e1bdb043887b08d7f1ae3afa\": container with ID starting with 26df1d5246b930b67d95714dd929a0e4f67da866e1bdb043887b08d7f1ae3afa not found: ID does not exist" containerID="26df1d5246b930b67d95714dd929a0e4f67da866e1bdb043887b08d7f1ae3afa" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.591981 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26df1d5246b930b67d95714dd929a0e4f67da866e1bdb043887b08d7f1ae3afa"} err="failed to get container status \"26df1d5246b930b67d95714dd929a0e4f67da866e1bdb043887b08d7f1ae3afa\": rpc error: code = NotFound desc = could not find container \"26df1d5246b930b67d95714dd929a0e4f67da866e1bdb043887b08d7f1ae3afa\": container with ID starting with 26df1d5246b930b67d95714dd929a0e4f67da866e1bdb043887b08d7f1ae3afa not found: ID does not exist" Feb 04 10:07:52 crc kubenswrapper[4644]: I0204 10:07:52.674544 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d382e62f-fe7d-4185-aa65-039f4f652e17" path="/var/lib/kubelet/pods/d382e62f-fe7d-4185-aa65-039f4f652e17/volumes" Feb 04 10:08:04 crc kubenswrapper[4644]: I0204 10:08:04.660248 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:08:04 crc kubenswrapper[4644]: E0204 10:08:04.661530 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:08:05 crc kubenswrapper[4644]: I0204 10:08:05.238900 4644 scope.go:117] "RemoveContainer" containerID="805b85768182bcd9f3932a3e7fe3a0b6150abde91738f0e30a284621c8d52851" Feb 04 10:08:05 crc kubenswrapper[4644]: I0204 10:08:05.261100 4644 scope.go:117] "RemoveContainer" containerID="fe9dcedf62a748973827bc6019626c2acd9af8f71e1a561393664300aa95d0a5" Feb 04 10:08:05 crc kubenswrapper[4644]: I0204 10:08:05.327278 4644 scope.go:117] "RemoveContainer" containerID="464d67585ad70769402984c9e0ce13f1fc38768b1532dd10ac45aebcfaa43290" Feb 04 10:08:19 crc kubenswrapper[4644]: I0204 10:08:19.660305 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:08:19 crc kubenswrapper[4644]: E0204 10:08:19.661103 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" 
podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:08:30 crc kubenswrapper[4644]: I0204 10:08:30.672268 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:08:30 crc kubenswrapper[4644]: E0204 10:08:30.673358 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:08:43 crc kubenswrapper[4644]: I0204 10:08:43.659705 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:08:43 crc kubenswrapper[4644]: E0204 10:08:43.660517 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:08:56 crc kubenswrapper[4644]: I0204 10:08:56.659826 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:08:56 crc kubenswrapper[4644]: E0204 10:08:56.661541 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:09:11 crc kubenswrapper[4644]: I0204 10:09:11.660240 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:09:11 crc kubenswrapper[4644]: E0204 10:09:11.660990 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:09:25 crc kubenswrapper[4644]: I0204 10:09:25.660255 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:09:25 crc kubenswrapper[4644]: E0204 10:09:25.661177 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:09:40 crc kubenswrapper[4644]: I0204 10:09:40.669794 4644 scope.go:117] "RemoveContainer" 
containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:09:40 crc kubenswrapper[4644]: E0204 10:09:40.670864 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:09:51 crc kubenswrapper[4644]: I0204 10:09:51.660073 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:09:51 crc kubenswrapper[4644]: E0204 10:09:51.660845 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:10:06 crc kubenswrapper[4644]: I0204 10:10:06.659790 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:10:06 crc kubenswrapper[4644]: E0204 10:10:06.661178 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:10:21 crc kubenswrapper[4644]: I0204 10:10:21.660217 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:10:21 crc kubenswrapper[4644]: E0204 10:10:21.661021 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:10:32 crc kubenswrapper[4644]: I0204 10:10:32.659911 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:10:32 crc kubenswrapper[4644]: E0204 10:10:32.660877 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:10:45 crc kubenswrapper[4644]: I0204 10:10:45.659879 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:10:45 crc kubenswrapper[4644]: E0204 10:10:45.660780 4644 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:10:58 crc kubenswrapper[4644]: I0204 10:10:58.660627 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:10:58 crc kubenswrapper[4644]: E0204 10:10:58.661619 4644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwrck_openshift-machine-config-operator(c2a87f38-c8a0-4007-b926-1dafb84e7483)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" podUID="c2a87f38-c8a0-4007-b926-1dafb84e7483" Feb 04 10:11:05 crc kubenswrapper[4644]: I0204 10:11:05.840163 4644 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xhg2m"] Feb 04 10:11:05 crc kubenswrapper[4644]: E0204 10:11:05.841121 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a256db3c-8355-4426-a515-28f0f8d3017c" containerName="copy" Feb 04 10:11:05 crc kubenswrapper[4644]: I0204 10:11:05.841136 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="a256db3c-8355-4426-a515-28f0f8d3017c" containerName="copy" Feb 04 10:11:05 crc kubenswrapper[4644]: E0204 10:11:05.841163 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d382e62f-fe7d-4185-aa65-039f4f652e17" containerName="extract-content" Feb 04 10:11:05 crc kubenswrapper[4644]: I0204 10:11:05.841170 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="d382e62f-fe7d-4185-aa65-039f4f652e17" containerName="extract-content" Feb 04 10:11:05 crc kubenswrapper[4644]: E0204 10:11:05.841187 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d382e62f-fe7d-4185-aa65-039f4f652e17" containerName="extract-utilities" Feb 04 10:11:05 crc kubenswrapper[4644]: I0204 10:11:05.841195 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="d382e62f-fe7d-4185-aa65-039f4f652e17" containerName="extract-utilities" Feb 04 10:11:05 crc kubenswrapper[4644]: E0204 10:11:05.841207 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a256db3c-8355-4426-a515-28f0f8d3017c" containerName="gather" Feb 04 10:11:05 crc kubenswrapper[4644]: I0204 10:11:05.841213 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="a256db3c-8355-4426-a515-28f0f8d3017c" containerName="gather" Feb 04 10:11:05 crc kubenswrapper[4644]: E0204 10:11:05.841234 4644 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d382e62f-fe7d-4185-aa65-039f4f652e17" containerName="registry-server" Feb 04 10:11:05 crc kubenswrapper[4644]: I0204 10:11:05.841260 4644 state_mem.go:107] "Deleted CPUSet assignment" podUID="d382e62f-fe7d-4185-aa65-039f4f652e17" containerName="registry-server" Feb 04 10:11:05 crc kubenswrapper[4644]: I0204 10:11:05.841483 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="d382e62f-fe7d-4185-aa65-039f4f652e17" containerName="registry-server" Feb 04 10:11:05 crc kubenswrapper[4644]: I0204 10:11:05.841510 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="a256db3c-8355-4426-a515-28f0f8d3017c" 
containerName="copy" Feb 04 10:11:05 crc kubenswrapper[4644]: I0204 10:11:05.841522 4644 memory_manager.go:354] "RemoveStaleState removing state" podUID="a256db3c-8355-4426-a515-28f0f8d3017c" containerName="gather" Feb 04 10:11:05 crc kubenswrapper[4644]: I0204 10:11:05.843053 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:05 crc kubenswrapper[4644]: I0204 10:11:05.865873 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xhg2m"] Feb 04 10:11:05 crc kubenswrapper[4644]: I0204 10:11:05.902666 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj29b\" (UniqueName: \"kubernetes.io/projected/0af6ab82-1f7e-4336-bff2-6a93e8ada661-kube-api-access-sj29b\") pod \"certified-operators-xhg2m\" (UID: \"0af6ab82-1f7e-4336-bff2-6a93e8ada661\") " pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:05 crc kubenswrapper[4644]: I0204 10:11:05.902721 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0af6ab82-1f7e-4336-bff2-6a93e8ada661-utilities\") pod \"certified-operators-xhg2m\" (UID: \"0af6ab82-1f7e-4336-bff2-6a93e8ada661\") " pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:05 crc kubenswrapper[4644]: I0204 10:11:05.902770 4644 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0af6ab82-1f7e-4336-bff2-6a93e8ada661-catalog-content\") pod \"certified-operators-xhg2m\" (UID: \"0af6ab82-1f7e-4336-bff2-6a93e8ada661\") " pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:06 crc kubenswrapper[4644]: I0204 10:11:06.005677 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj29b\" (UniqueName: \"kubernetes.io/projected/0af6ab82-1f7e-4336-bff2-6a93e8ada661-kube-api-access-sj29b\") pod \"certified-operators-xhg2m\" (UID: \"0af6ab82-1f7e-4336-bff2-6a93e8ada661\") " pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:06 crc kubenswrapper[4644]: I0204 10:11:06.005730 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0af6ab82-1f7e-4336-bff2-6a93e8ada661-utilities\") pod \"certified-operators-xhg2m\" (UID: \"0af6ab82-1f7e-4336-bff2-6a93e8ada661\") " pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:06 crc kubenswrapper[4644]: I0204 10:11:06.005784 4644 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0af6ab82-1f7e-4336-bff2-6a93e8ada661-catalog-content\") pod \"certified-operators-xhg2m\" (UID: \"0af6ab82-1f7e-4336-bff2-6a93e8ada661\") " pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:06 crc kubenswrapper[4644]: I0204 10:11:06.006303 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0af6ab82-1f7e-4336-bff2-6a93e8ada661-catalog-content\") pod \"certified-operators-xhg2m\" (UID: \"0af6ab82-1f7e-4336-bff2-6a93e8ada661\") " pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:06 crc kubenswrapper[4644]: I0204 10:11:06.006342 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0af6ab82-1f7e-4336-bff2-6a93e8ada661-utilities\") pod \"certified-operators-xhg2m\" (UID: \"0af6ab82-1f7e-4336-bff2-6a93e8ada661\") " pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:06 crc kubenswrapper[4644]: I0204 10:11:06.034632 4644 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj29b\" (UniqueName: \"kubernetes.io/projected/0af6ab82-1f7e-4336-bff2-6a93e8ada661-kube-api-access-sj29b\") pod \"certified-operators-xhg2m\" (UID: \"0af6ab82-1f7e-4336-bff2-6a93e8ada661\") " pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:06 crc kubenswrapper[4644]: I0204 10:11:06.164487 4644 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:06 crc kubenswrapper[4644]: I0204 10:11:06.574570 4644 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xhg2m"] Feb 04 10:11:06 crc kubenswrapper[4644]: W0204 10:11:06.589249 4644 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0af6ab82_1f7e_4336_bff2_6a93e8ada661.slice/crio-7244dc4febba4683f75a7b9df4027e0a23dca5350de1096b3eaff00791769097 WatchSource:0}: Error finding container 7244dc4febba4683f75a7b9df4027e0a23dca5350de1096b3eaff00791769097: Status 404 returned error can't find the container with id 7244dc4febba4683f75a7b9df4027e0a23dca5350de1096b3eaff00791769097 Feb 04 10:11:07 crc kubenswrapper[4644]: I0204 10:11:07.255420 4644 generic.go:334] "Generic (PLEG): container finished" podID="0af6ab82-1f7e-4336-bff2-6a93e8ada661" containerID="92104e19bb745c06c9ad939ba4d60c26954cad68afd2ef573cb145a3851f485e" exitCode=0 Feb 04 10:11:07 crc kubenswrapper[4644]: I0204 10:11:07.255457 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhg2m" event={"ID":"0af6ab82-1f7e-4336-bff2-6a93e8ada661","Type":"ContainerDied","Data":"92104e19bb745c06c9ad939ba4d60c26954cad68afd2ef573cb145a3851f485e"} Feb 04 10:11:07 crc kubenswrapper[4644]: I0204 10:11:07.255747 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhg2m" event={"ID":"0af6ab82-1f7e-4336-bff2-6a93e8ada661","Type":"ContainerStarted","Data":"7244dc4febba4683f75a7b9df4027e0a23dca5350de1096b3eaff00791769097"} Feb 04 10:11:09 crc kubenswrapper[4644]: I0204 10:11:09.659683 4644 scope.go:117] "RemoveContainer" containerID="460ee9f83317ee6e4fd2a74afd04166707410e508049073f7008952c3a835262" Feb 04 10:11:10 crc kubenswrapper[4644]: I0204 10:11:10.283559 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwrck" event={"ID":"c2a87f38-c8a0-4007-b926-1dafb84e7483","Type":"ContainerStarted","Data":"ea49d4be7b608eb5225e48e91be31713c32047e900b7711c182a079fb6b550cc"} Feb 04 10:11:11 crc kubenswrapper[4644]: I0204 10:11:11.306716 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhg2m" event={"ID":"0af6ab82-1f7e-4336-bff2-6a93e8ada661","Type":"ContainerStarted","Data":"bd397ed0e5c14686f72c8094ca635ded29cee98b488177bfb23004825acbdaa3"} Feb 04 10:11:12 crc kubenswrapper[4644]: I0204 10:11:12.318115 4644 generic.go:334] "Generic (PLEG): container finished" podID="0af6ab82-1f7e-4336-bff2-6a93e8ada661" containerID="bd397ed0e5c14686f72c8094ca635ded29cee98b488177bfb23004825acbdaa3" exitCode=0 Feb 04 
10:11:12 crc kubenswrapper[4644]: I0204 10:11:12.318159 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhg2m" event={"ID":"0af6ab82-1f7e-4336-bff2-6a93e8ada661","Type":"ContainerDied","Data":"bd397ed0e5c14686f72c8094ca635ded29cee98b488177bfb23004825acbdaa3"} Feb 04 10:11:13 crc kubenswrapper[4644]: I0204 10:11:13.329289 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhg2m" event={"ID":"0af6ab82-1f7e-4336-bff2-6a93e8ada661","Type":"ContainerStarted","Data":"56e5fb7fa1d72b124a8f88948f50600414be8576800e13540bdab957bae90942"} Feb 04 10:11:13 crc kubenswrapper[4644]: I0204 10:11:13.372952 4644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xhg2m" podStartSLOduration=2.926186026 podStartE2EDuration="8.372928781s" podCreationTimestamp="2026-02-04 10:11:05 +0000 UTC" firstStartedPulling="2026-02-04 10:11:07.257145679 +0000 UTC m=+5377.297203434" lastFinishedPulling="2026-02-04 10:11:12.703888254 +0000 UTC m=+5382.743946189" observedRunningTime="2026-02-04 10:11:13.354824145 +0000 UTC m=+5383.394881910" watchObservedRunningTime="2026-02-04 10:11:13.372928781 +0000 UTC m=+5383.412986536" Feb 04 10:11:16 crc kubenswrapper[4644]: I0204 10:11:16.165872 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:16 crc kubenswrapper[4644]: I0204 10:11:16.166444 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:16 crc kubenswrapper[4644]: I0204 10:11:16.210968 4644 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:26 crc kubenswrapper[4644]: I0204 10:11:26.223570 4644 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:26 crc kubenswrapper[4644]: I0204 10:11:26.284456 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xhg2m"] Feb 04 10:11:26 crc kubenswrapper[4644]: I0204 10:11:26.445992 4644 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xhg2m" podUID="0af6ab82-1f7e-4336-bff2-6a93e8ada661" containerName="registry-server" containerID="cri-o://56e5fb7fa1d72b124a8f88948f50600414be8576800e13540bdab957bae90942" gracePeriod=2 Feb 04 10:11:26 crc kubenswrapper[4644]: I0204 10:11:26.874778 4644 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:26 crc kubenswrapper[4644]: I0204 10:11:26.941028 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj29b\" (UniqueName: \"kubernetes.io/projected/0af6ab82-1f7e-4336-bff2-6a93e8ada661-kube-api-access-sj29b\") pod \"0af6ab82-1f7e-4336-bff2-6a93e8ada661\" (UID: \"0af6ab82-1f7e-4336-bff2-6a93e8ada661\") " Feb 04 10:11:26 crc kubenswrapper[4644]: I0204 10:11:26.941353 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0af6ab82-1f7e-4336-bff2-6a93e8ada661-utilities\") pod \"0af6ab82-1f7e-4336-bff2-6a93e8ada661\" (UID: \"0af6ab82-1f7e-4336-bff2-6a93e8ada661\") " Feb 04 10:11:26 crc kubenswrapper[4644]: I0204 10:11:26.941747 4644 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0af6ab82-1f7e-4336-bff2-6a93e8ada661-catalog-content\") pod \"0af6ab82-1f7e-4336-bff2-6a93e8ada661\" (UID: \"0af6ab82-1f7e-4336-bff2-6a93e8ada661\") " Feb 04 10:11:26 crc kubenswrapper[4644]: I0204 10:11:26.942968 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0af6ab82-1f7e-4336-bff2-6a93e8ada661-utilities" (OuterVolumeSpecName: "utilities") pod "0af6ab82-1f7e-4336-bff2-6a93e8ada661" (UID: "0af6ab82-1f7e-4336-bff2-6a93e8ada661"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 10:11:26 crc kubenswrapper[4644]: I0204 10:11:26.946905 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0af6ab82-1f7e-4336-bff2-6a93e8ada661-kube-api-access-sj29b" (OuterVolumeSpecName: "kube-api-access-sj29b") pod "0af6ab82-1f7e-4336-bff2-6a93e8ada661" (UID: "0af6ab82-1f7e-4336-bff2-6a93e8ada661"). InnerVolumeSpecName "kube-api-access-sj29b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 10:11:26 crc kubenswrapper[4644]: I0204 10:11:26.993104 4644 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0af6ab82-1f7e-4336-bff2-6a93e8ada661-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0af6ab82-1f7e-4336-bff2-6a93e8ada661" (UID: "0af6ab82-1f7e-4336-bff2-6a93e8ada661"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.044298 4644 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0af6ab82-1f7e-4336-bff2-6a93e8ada661-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.044363 4644 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0af6ab82-1f7e-4336-bff2-6a93e8ada661-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.044379 4644 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj29b\" (UniqueName: \"kubernetes.io/projected/0af6ab82-1f7e-4336-bff2-6a93e8ada661-kube-api-access-sj29b\") on node \"crc\" DevicePath \"\"" Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.456480 4644 generic.go:334] "Generic (PLEG): container finished" podID="0af6ab82-1f7e-4336-bff2-6a93e8ada661" containerID="56e5fb7fa1d72b124a8f88948f50600414be8576800e13540bdab957bae90942" exitCode=0 Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.456536 4644 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xhg2m" Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.456538 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhg2m" event={"ID":"0af6ab82-1f7e-4336-bff2-6a93e8ada661","Type":"ContainerDied","Data":"56e5fb7fa1d72b124a8f88948f50600414be8576800e13540bdab957bae90942"} Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.456966 4644 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhg2m" event={"ID":"0af6ab82-1f7e-4336-bff2-6a93e8ada661","Type":"ContainerDied","Data":"7244dc4febba4683f75a7b9df4027e0a23dca5350de1096b3eaff00791769097"} Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.456990 4644 scope.go:117] "RemoveContainer" containerID="56e5fb7fa1d72b124a8f88948f50600414be8576800e13540bdab957bae90942" Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.487694 4644 scope.go:117] "RemoveContainer" containerID="bd397ed0e5c14686f72c8094ca635ded29cee98b488177bfb23004825acbdaa3" Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.500428 4644 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xhg2m"] Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.509855 4644 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xhg2m"] Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.515742 4644 scope.go:117] "RemoveContainer" containerID="92104e19bb745c06c9ad939ba4d60c26954cad68afd2ef573cb145a3851f485e" Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.555484 4644 scope.go:117] "RemoveContainer" containerID="56e5fb7fa1d72b124a8f88948f50600414be8576800e13540bdab957bae90942" Feb 04 10:11:27 crc kubenswrapper[4644]: E0204 10:11:27.555875 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56e5fb7fa1d72b124a8f88948f50600414be8576800e13540bdab957bae90942\": container with ID starting with 56e5fb7fa1d72b124a8f88948f50600414be8576800e13540bdab957bae90942 not found: ID does not exist" containerID="56e5fb7fa1d72b124a8f88948f50600414be8576800e13540bdab957bae90942" Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.555908 
4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e5fb7fa1d72b124a8f88948f50600414be8576800e13540bdab957bae90942"} err="failed to get container status \"56e5fb7fa1d72b124a8f88948f50600414be8576800e13540bdab957bae90942\": rpc error: code = NotFound desc = could not find container \"56e5fb7fa1d72b124a8f88948f50600414be8576800e13540bdab957bae90942\": container with ID starting with 56e5fb7fa1d72b124a8f88948f50600414be8576800e13540bdab957bae90942 not found: ID does not exist" Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.555928 4644 scope.go:117] "RemoveContainer" containerID="bd397ed0e5c14686f72c8094ca635ded29cee98b488177bfb23004825acbdaa3" Feb 04 10:11:27 crc kubenswrapper[4644]: E0204 10:11:27.556187 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd397ed0e5c14686f72c8094ca635ded29cee98b488177bfb23004825acbdaa3\": container with ID starting with bd397ed0e5c14686f72c8094ca635ded29cee98b488177bfb23004825acbdaa3 not found: ID does not exist" containerID="bd397ed0e5c14686f72c8094ca635ded29cee98b488177bfb23004825acbdaa3" Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.556215 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd397ed0e5c14686f72c8094ca635ded29cee98b488177bfb23004825acbdaa3"} err="failed to get container status \"bd397ed0e5c14686f72c8094ca635ded29cee98b488177bfb23004825acbdaa3\": rpc error: code = NotFound desc = could not find container \"bd397ed0e5c14686f72c8094ca635ded29cee98b488177bfb23004825acbdaa3\": container with ID starting with bd397ed0e5c14686f72c8094ca635ded29cee98b488177bfb23004825acbdaa3 not found: ID does not exist" Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.556277 4644 scope.go:117] "RemoveContainer" containerID="92104e19bb745c06c9ad939ba4d60c26954cad68afd2ef573cb145a3851f485e" Feb 04 10:11:27 crc kubenswrapper[4644]: E0204 10:11:27.556564 4644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92104e19bb745c06c9ad939ba4d60c26954cad68afd2ef573cb145a3851f485e\": container with ID starting with 92104e19bb745c06c9ad939ba4d60c26954cad68afd2ef573cb145a3851f485e not found: ID does not exist" containerID="92104e19bb745c06c9ad939ba4d60c26954cad68afd2ef573cb145a3851f485e" Feb 04 10:11:27 crc kubenswrapper[4644]: I0204 10:11:27.556588 4644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92104e19bb745c06c9ad939ba4d60c26954cad68afd2ef573cb145a3851f485e"} err="failed to get container status \"92104e19bb745c06c9ad939ba4d60c26954cad68afd2ef573cb145a3851f485e\": rpc error: code = NotFound desc = could not find container \"92104e19bb745c06c9ad939ba4d60c26954cad68afd2ef573cb145a3851f485e\": container with ID starting with 92104e19bb745c06c9ad939ba4d60c26954cad68afd2ef573cb145a3851f485e not found: ID does not exist" Feb 04 10:11:28 crc kubenswrapper[4644]: I0204 10:11:28.672076 4644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0af6ab82-1f7e-4336-bff2-6a93e8ada661" path="/var/lib/kubelet/pods/0af6ab82-1f7e-4336-bff2-6a93e8ada661/volumes"